Feb 9 19:45:31.779687 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:45:31.779709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:31.779722 kernel: BIOS-provided physical RAM map: Feb 9 19:45:31.779729 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:45:31.779737 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:45:31.779744 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:45:31.779753 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:45:31.779761 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:45:31.779768 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:45:31.779777 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:45:31.779785 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 9 19:45:31.779792 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:45:31.779800 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:45:31.779821 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:45:31.779830 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:45:31.779840 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:45:31.779848 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:45:31.779856 kernel: NX (Execute Disable) protection: active Feb 9 19:45:31.779864 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 9 19:45:31.779872 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 9 19:45:31.779880 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 9 19:45:31.779888 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 9 19:45:31.779896 kernel: extended physical RAM map: Feb 9 19:45:31.779904 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:45:31.779912 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:45:31.779922 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:45:31.779930 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:45:31.779938 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:45:31.779946 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:45:31.779954 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:45:31.779962 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Feb 9 19:45:31.779970 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Feb 9 19:45:31.779978 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Feb 9 19:45:31.779986 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] 
usable Feb 9 19:45:31.779994 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Feb 9 19:45:31.780002 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:45:31.780011 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:45:31.780019 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:45:31.780027 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:45:31.780036 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:45:31.780048 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:45:31.780056 kernel: efi: EFI v2.70 by EDK II Feb 9 19:45:31.780065 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Feb 9 19:45:31.780075 kernel: random: crng init done Feb 9 19:45:31.780084 kernel: SMBIOS 2.8 present. Feb 9 19:45:31.780092 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Feb 9 19:45:31.780101 kernel: Hypervisor detected: KVM Feb 9 19:45:31.780109 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:45:31.780118 kernel: kvm-clock: cpu 0, msr 53faa001, primary cpu clock Feb 9 19:45:31.780127 kernel: kvm-clock: using sched offset of 3956703123 cycles Feb 9 19:45:31.780136 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:45:31.780145 kernel: tsc: Detected 2794.750 MHz processor Feb 9 19:45:31.780178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:45:31.780187 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:45:31.780196 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 9 19:45:31.780205 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:45:31.780214 kernel: Using GB pages for direct mapping Feb 9 19:45:31.780223 kernel: Secure boot disabled Feb 9 19:45:31.780232 kernel: ACPI: Early table checksum verification disabled Feb 9 19:45:31.780240 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 9 19:45:31.780249 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Feb 9 19:45:31.780260 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:31.780269 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:31.780278 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 9 19:45:31.780287 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:31.780296 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:31.780313 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:31.780322 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 9 19:45:31.780331 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Feb 9 19:45:31.780340 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Feb 9 19:45:31.780350 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 9 19:45:31.780359 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Feb 9 19:45:31.780367 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Feb 9 19:45:31.780377 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Feb 9 19:45:31.780386 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Feb 9 19:45:31.780394 kernel: No NUMA configuration found Feb 9 19:45:31.780404 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 9 19:45:31.780413 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 9 19:45:31.780422 kernel: Zone ranges: Feb 9 19:45:31.780433 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:45:31.780442 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 9 19:45:31.780451 kernel: Normal empty Feb 9 19:45:31.780460 kernel: Movable zone start for each node Feb 9 19:45:31.780469 kernel: Early memory node ranges Feb 9 19:45:31.780478 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:45:31.780487 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 9 19:45:31.780496 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 9 19:45:31.780504 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 9 19:45:31.780515 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 9 19:45:31.780524 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 9 19:45:31.780533 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 9 19:45:31.780542 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:45:31.780551 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:45:31.780560 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 9 19:45:31.780569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:45:31.780578 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 9 19:45:31.780587 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 9 19:45:31.780598 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 9 19:45:31.780607 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:45:31.780616 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:45:31.780625 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:45:31.780634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 19:45:31.780643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:45:31.780652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:45:31.780661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:45:31.780670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:45:31.780681 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:45:31.780690 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 19:45:31.780699 kernel: TSC deadline timer available Feb 9 19:45:31.780707 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 9 19:45:31.780716 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 9 19:45:31.780725 kernel: kvm-guest: setup PV sched yield Feb 9 19:45:31.780734 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Feb 9 19:45:31.780743 kernel: Booting paravirtualized kernel on KVM Feb 9 19:45:31.780752 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:45:31.780762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 9 19:45:31.780772 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 9 19:45:31.780781 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 
9 19:45:31.780796 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 9 19:45:31.780817 kernel: kvm-guest: setup async PF for cpu 0 Feb 9 19:45:31.780826 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Feb 9 19:45:31.780836 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:45:31.780845 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:45:31.780854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 9 19:45:31.780864 kernel: Policy zone: DMA32 Feb 9 19:45:31.780874 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:31.780885 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:45:31.780897 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:45:31.780906 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:45:31.780915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:45:31.780925 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved) Feb 9 19:45:31.780935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 19:45:31.780946 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:45:31.780955 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:45:31.780964 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:45:31.780974 kernel: rcu: RCU event tracing is enabled. Feb 9 19:45:31.780984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 19:45:31.780993 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:45:31.781003 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:45:31.781012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:45:31.781021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 19:45:31.781032 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 9 19:45:31.781041 kernel: Console: colour dummy device 80x25 Feb 9 19:45:31.781051 kernel: printk: console [ttyS0] enabled Feb 9 19:45:31.781060 kernel: ACPI: Core revision 20210730 Feb 9 19:45:31.781069 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 9 19:45:31.781079 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:45:31.781088 kernel: x2apic enabled Feb 9 19:45:31.781097 kernel: Switched APIC routing to physical x2apic. Feb 9 19:45:31.781107 kernel: kvm-guest: setup PV IPIs Feb 9 19:45:31.781118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:45:31.781127 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 19:45:31.781137 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 9 19:45:31.781146 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 9 19:45:31.781155 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 9 19:45:31.781164 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 9 19:45:31.781174 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:45:31.781183 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:45:31.781192 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:45:31.781204 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:45:31.781213 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 9 19:45:31.781222 kernel: RETBleed: Mitigation: untrained return thunk Feb 9 19:45:31.781232 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:45:31.781241 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:45:31.781251 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:45:31.781260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:45:31.781269 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:45:31.781280 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:45:31.781290 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 19:45:31.781299 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:45:31.781316 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:45:31.781326 kernel: LSM: Security Framework initializing Feb 9 19:45:31.781335 kernel: SELinux: Initializing. Feb 9 19:45:31.781345 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:45:31.781354 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:45:31.781364 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 9 19:45:31.781376 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 9 19:45:31.781385 kernel: ... version: 0 Feb 9 19:45:31.781394 kernel: ... bit width: 48 Feb 9 19:45:31.781403 kernel: ... generic registers: 6 Feb 9 19:45:31.781413 kernel: ... value mask: 0000ffffffffffff Feb 9 19:45:31.781422 kernel: ... max period: 00007fffffffffff Feb 9 19:45:31.781431 kernel: ... fixed-purpose events: 0 Feb 9 19:45:31.781441 kernel: ... event mask: 000000000000003f Feb 9 19:45:31.781450 kernel: signal: max sigframe size: 1776 Feb 9 19:45:31.781459 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:45:31.781471 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:45:31.781480 kernel: x86: Booting SMP configuration: Feb 9 19:45:31.781489 kernel: .... 
node #0, CPUs: #1 Feb 9 19:45:31.781499 kernel: kvm-clock: cpu 1, msr 53faa041, secondary cpu clock Feb 9 19:45:31.781508 kernel: kvm-guest: setup async PF for cpu 1 Feb 9 19:45:31.781517 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Feb 9 19:45:31.781526 kernel: #2 Feb 9 19:45:31.781536 kernel: kvm-clock: cpu 2, msr 53faa081, secondary cpu clock Feb 9 19:45:31.781545 kernel: kvm-guest: setup async PF for cpu 2 Feb 9 19:45:31.781557 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Feb 9 19:45:31.781566 kernel: #3 Feb 9 19:45:31.781575 kernel: kvm-clock: cpu 3, msr 53faa0c1, secondary cpu clock Feb 9 19:45:31.781585 kernel: kvm-guest: setup async PF for cpu 3 Feb 9 19:45:31.781594 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Feb 9 19:45:31.781603 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 19:45:31.781613 kernel: smpboot: Max logical packages: 1 Feb 9 19:45:31.781622 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 9 19:45:31.781631 kernel: devtmpfs: initialized Feb 9 19:45:31.781642 kernel: x86/mm: Memory block size: 128MB Feb 9 19:45:31.781652 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 9 19:45:31.781661 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 9 19:45:31.781671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 9 19:45:31.781681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 9 19:45:31.781690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 9 19:45:31.781709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:45:31.781719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 19:45:31.781729 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:45:31.781740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:45:31.781750 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:45:31.781759 kernel: audit: type=2000 audit(1707507930.206:1): state=initialized audit_enabled=0 res=1 Feb 9 19:45:31.781769 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:45:31.781778 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:45:31.781787 kernel: cpuidle: using governor menu Feb 9 19:45:31.781796 kernel: ACPI: bus type PCI registered Feb 9 19:45:31.781816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:45:31.781826 kernel: dca service started, version 1.12.1 Feb 9 19:45:31.781838 kernel: PCI: Using configuration type 1 for base access Feb 9 19:45:31.781847 kernel: PCI: Using configuration type 1 for extended access Feb 9 19:45:31.781857 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:45:31.781866 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:45:31.781875 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:45:31.781885 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:45:31.781894 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:45:31.781903 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:45:31.781912 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:45:31.781923 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:45:31.781932 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:45:31.781942 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:45:31.781951 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:45:31.781960 kernel: ACPI: Interpreter enabled Feb 9 19:45:31.781969 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:45:31.781979 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:45:31.781988 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:45:31.781997 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 19:45:31.782009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:45:31.782152 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:45:31.782168 kernel: acpiphp: Slot [3] registered Feb 9 19:45:31.782178 kernel: acpiphp: Slot [4] registered Feb 9 19:45:31.782187 kernel: acpiphp: Slot [5] registered Feb 9 19:45:31.782196 kernel: acpiphp: Slot [6] registered Feb 9 19:45:31.782206 kernel: acpiphp: Slot [7] registered Feb 9 19:45:31.782215 kernel: acpiphp: Slot [8] registered Feb 9 19:45:31.782224 kernel: acpiphp: Slot [9] registered Feb 9 19:45:31.782236 kernel: acpiphp: Slot [10] registered Feb 9 19:45:31.782245 kernel: acpiphp: Slot [11] registered Feb 9 19:45:31.782254 kernel: acpiphp: Slot [12] registered Feb 9 19:45:31.782263 kernel: acpiphp: Slot [13] registered Feb 9 19:45:31.782272 kernel: acpiphp: Slot [14] registered Feb 9 19:45:31.782281 kernel: acpiphp: Slot [15] registered Feb 9 19:45:31.782290 kernel: acpiphp: Slot [16] registered Feb 9 19:45:31.782300 kernel: acpiphp: Slot [17] registered Feb 9 19:45:31.782317 kernel: acpiphp: Slot [18] registered Feb 9 19:45:31.782329 kernel: acpiphp: Slot [19] registered Feb 9 19:45:31.782338 kernel: acpiphp: Slot [20] registered Feb 9 19:45:31.782347 kernel: acpiphp: Slot [21] registered Feb 9 19:45:31.782357 kernel: acpiphp: Slot [22] registered Feb 9 19:45:31.782366 kernel: acpiphp: Slot [23] registered Feb 9 19:45:31.782375 kernel: acpiphp: Slot [24] registered Feb 9 19:45:31.782384 kernel: acpiphp: Slot [25] registered Feb 9 19:45:31.782393 kernel: acpiphp: Slot [26] registered Feb 9 19:45:31.782403 kernel: acpiphp: Slot [27] registered Feb 9 19:45:31.782414 kernel: acpiphp: Slot [28] registered Feb 9 19:45:31.782423 kernel: acpiphp: Slot [29] registered Feb 9 19:45:31.782432 kernel: acpiphp: Slot [30] registered Feb 9 19:45:31.782441 kernel: acpiphp: Slot [31] registered Feb 9 19:45:31.782451 kernel: PCI host bridge to bus 0000:00 Feb 9 19:45:31.782551 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:45:31.782633 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:45:31.782714 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:45:31.782842 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 9 19:45:31.782928 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Feb 9 19:45:31.783006 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:45:31.783139 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:45:31.783241 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:45:31.783352 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 19:45:31.783727 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 9 19:45:31.783844 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:45:31.783933 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:45:31.784022 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:45:31.784202 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:45:31.784316 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:45:31.784410 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:45:31.784507 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 19:45:31.784607 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 9 19:45:31.784697 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 9 19:45:31.784788 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Feb 9 19:45:31.784895 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 9 19:45:31.784984 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Feb 9 19:45:31.785072 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:45:31.785173 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:45:31.785266 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 9 19:45:31.785381 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 9 19:45:31.785472 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 9 19:45:31.785569 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 19:45:31.785663 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 19:45:31.785754 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 9 19:45:31.786054 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 9 19:45:31.786201 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:45:31.786318 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 19:45:31.786439 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Feb 9 19:45:31.786562 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 9 19:45:31.786682 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 9 19:45:31.786695 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:45:31.786708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:45:31.786718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:45:31.786727 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:45:31.786749 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:45:31.786759 kernel: iommu: Default domain type: Translated Feb 9 19:45:31.786768 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:45:31.790648 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 19:45:31.790781 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:45:31.790876 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Feb 9 19:45:31.790902 kernel: vgaarb: loaded Feb 9 19:45:31.790910 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:45:31.790917 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:45:31.790925 kernel: PTP clock support registered Feb 9 19:45:31.790933 kernel: Registered efivars operations Feb 9 19:45:31.790941 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:45:31.790957 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:45:31.790965 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 9 19:45:31.790973 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 9 19:45:31.790981 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Feb 9 19:45:31.790988 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Feb 9 19:45:31.790996 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 9 19:45:31.791003 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 9 19:45:31.791020 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 9 19:45:31.791028 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 9 19:45:31.791035 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:45:31.791042 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:45:31.791050 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:45:31.791066 kernel: pnp: PnP ACPI init Feb 9 19:45:31.791157 kernel: pnp 00:02: [dma 2] Feb 9 19:45:31.791180 kernel: pnp: PnP ACPI: found 6 devices Feb 9 19:45:31.791188 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:45:31.791195 kernel: NET: Registered PF_INET protocol family Feb 9 19:45:31.791202 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:45:31.791219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 19:45:31.791227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:45:31.791236 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:45:31.791244 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 19:45:31.791251 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 19:45:31.791259 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:45:31.791266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:45:31.791273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:45:31.791281 kernel: NET: Registered PF_XDP protocol family Feb 9 19:45:31.791370 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 9 19:45:31.791456 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 9 19:45:31.791526 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:45:31.791587 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:45:31.791649 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:45:31.791724 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 9 19:45:31.793357 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Feb 9 19:45:31.793441 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 19:45:31.793515 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:45:31.793588 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:45:31.793598 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:45:31.793606 kernel: Initialise system trusted keyrings Feb 9 19:45:31.793613 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 19:45:31.793621 kernel: Key type asymmetric registered Feb 9 19:45:31.793628 kernel: Asymmetric key parser 'x509' registered Feb 9 19:45:31.793635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:45:31.793643 kernel: io scheduler mq-deadline registered Feb 9 19:45:31.793651 kernel: io scheduler kyber registered Feb 9 19:45:31.793660 kernel: io scheduler bfq registered Feb 9 19:45:31.793667 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:45:31.793675 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:45:31.793683 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 19:45:31.793691 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:45:31.793698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:45:31.793706 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:45:31.793713 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:45:31.793721 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:45:31.793729 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:45:31.793737 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:45:31.793842 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 9 19:45:31.793924 kernel: rtc_cmos 00:05: registered as rtc0 Feb 9 19:45:31.793991 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:45:31 UTC (1707507931) Feb 9 19:45:31.794055 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 9 19:45:31.794064 kernel: efifb: probing for efifb Feb 9 19:45:31.794072 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 9 19:45:31.794079 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 9 19:45:31.794087 kernel: efifb: scrolling: redraw Feb 9 19:45:31.794094 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:45:31.794102 kernel: Console: switching to colour frame buffer device 160x50 Feb 9 19:45:31.794109 kernel: fb0: EFI VGA frame buffer device Feb 9 19:45:31.794119 kernel: pstore: Registered efi as persistent store backend Feb 9 19:45:31.794126 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:45:31.794133 kernel: Segment Routing with IPv6 Feb 9 19:45:31.794140 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:45:31.794148 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:45:31.794155 kernel: Key type dns_resolver registered Feb 9 19:45:31.794163 kernel: IPI shorthand broadcast: enabled Feb 9 19:45:31.794170 kernel: sched_clock: Marking stable (382238592, 105407860)->(495928636, -8282184) Feb 9 19:45:31.794178 kernel: registered taskstats version 1 Feb 9 19:45:31.794185 kernel: Loading compiled-in X.509 certificates Feb 9 19:45:31.794194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:45:31.794201 kernel: Key type .fscrypt registered Feb 9 19:45:31.794208 kernel: Key type fscrypt-provisioning registered Feb 9 19:45:31.794216 kernel: pstore: Using crash dump compression: deflate Feb 9 19:45:31.794223 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:45:31.794230 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:45:31.794238 kernel: ima: No architecture policies found Feb 9 19:45:31.794245 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:45:31.794254 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:45:31.794263 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:45:31.794270 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:45:31.794278 kernel: Run /init as init process Feb 9 19:45:31.794285 kernel: with arguments: Feb 9 19:45:31.794292 kernel: /init Feb 9 19:45:31.794299 kernel: with environment: Feb 9 19:45:31.794316 kernel: HOME=/ Feb 9 19:45:31.794324 kernel: TERM=linux Feb 9 19:45:31.794331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:45:31.794344 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:45:31.794354 systemd[1]: Detected virtualization kvm. Feb 9 19:45:31.794362 systemd[1]: Detected architecture x86-64. Feb 9 19:45:31.794370 systemd[1]: Running in initrd. Feb 9 19:45:31.794378 systemd[1]: No hostname configured, using default hostname. Feb 9 19:45:31.794385 systemd[1]: Hostname set to . Feb 9 19:45:31.794393 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:45:31.794402 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:45:31.794410 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:45:31.794418 systemd[1]: Reached target cryptsetup.target. Feb 9 19:45:31.794426 systemd[1]: Reached target paths.target. Feb 9 19:45:31.794433 systemd[1]: Reached target slices.target. Feb 9 19:45:31.794441 systemd[1]: Reached target swap.target. Feb 9 19:45:31.794449 systemd[1]: Reached target timers.target. Feb 9 19:45:31.794458 systemd[1]: Listening on iscsid.socket. Feb 9 19:45:31.794466 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:45:31.794474 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:45:31.794482 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:45:31.794490 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:45:31.794498 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:45:31.794505 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:45:31.794513 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:45:31.794521 systemd[1]: Reached target sockets.target. Feb 9 19:45:31.794530 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:45:31.794538 systemd[1]: Finished network-cleanup.service. Feb 9 19:45:31.794546 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:45:31.794554 systemd[1]: Starting systemd-journald.service... Feb 9 19:45:31.794562 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:45:31.794569 systemd[1]: Starting systemd-resolved.service... Feb 9 19:45:31.794577 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:45:31.794585 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:45:31.794592 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:45:31.794601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:45:31.794609 systemd[1]: Finished systemd-vconsole-setup.service. 
Feb 9 19:45:31.794618 kernel: audit: type=1130 audit(1707507931.779:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.794626 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:45:31.794634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:45:31.794642 kernel: audit: type=1130 audit(1707507931.787:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.794653 systemd-journald[198]: Journal started Feb 9 19:45:31.794700 systemd-journald[198]: Runtime Journal (/run/log/journal/8b6b666203754770a58d6ab845309d81) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:45:31.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.787758 systemd-modules-load[199]: Inserted module 'overlay' Feb 9 19:45:31.797043 systemd[1]: Started systemd-journald.service. Feb 9 19:45:31.797066 kernel: audit: type=1130 audit(1707507931.795:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.797026 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:45:31.798790 systemd-resolved[200]: Positive Trust Anchors: Feb 9 19:45:31.803108 kernel: audit: type=1130 audit(1707507931.798:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.798797 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:45:31.798835 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:45:31.810923 kernel: audit: type=1130 audit(1707507931.803:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:31.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.810976 dracut-cmdline[215]: dracut-dracut-053 Feb 9 19:45:31.810976 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 19:45:31.810976 dracut-cmdline[215]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:31.799974 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:45:31.803345 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 9 19:45:31.818179 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:45:31.803986 systemd[1]: Started systemd-resolved.service. Feb 9 19:45:31.804727 systemd[1]: Reached target nss-lookup.target. Feb 9 19:45:31.820231 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 9 19:45:31.820922 kernel: Bridge firewalling registered Feb 9 19:45:31.838830 kernel: SCSI subsystem initialized Feb 9 19:45:31.852614 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:45:31.852653 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:45:31.852664 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:45:31.855938 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 19:45:31.856633 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:45:31.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.857652 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:45:31.860654 kernel: audit: type=1130 audit(1707507931.856:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.862833 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:45:31.865789 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:45:31.868899 kernel: audit: type=1130 audit(1707507931.865:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.875825 kernel: iscsi: registered transport (tcp) Feb 9 19:45:31.894142 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:45:31.894182 kernel: QLogic iSCSI HBA Driver Feb 9 19:45:31.927675 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:45:31.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:31.929047 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:45:31.932221 kernel: audit: type=1130 audit(1707507931.927:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:31.976858 kernel: raid6: avx2x4 gen() 30279 MB/s Feb 9 19:45:31.993893 kernel: raid6: avx2x4 xor() 8101 MB/s Feb 9 19:45:32.010833 kernel: raid6: avx2x2 gen() 32598 MB/s Feb 9 19:45:32.027829 kernel: raid6: avx2x2 xor() 19303 MB/s Feb 9 19:45:32.044830 kernel: raid6: avx2x1 gen() 26550 MB/s Feb 9 19:45:32.061848 kernel: raid6: avx2x1 xor() 15399 MB/s Feb 9 19:45:32.078849 kernel: raid6: sse2x4 gen() 14724 MB/s Feb 9 19:45:32.095856 kernel: raid6: sse2x4 xor() 7426 MB/s Feb 9 19:45:32.112847 kernel: raid6: sse2x2 gen() 16458 MB/s Feb 9 19:45:32.129849 kernel: raid6: sse2x2 xor() 9873 MB/s Feb 9 19:45:32.146863 kernel: raid6: sse2x1 gen() 12413 MB/s Feb 9 19:45:32.164362 kernel: raid6: sse2x1 xor() 7829 MB/s Feb 9 19:45:32.164434 kernel: raid6: using algorithm avx2x2 gen() 32598 MB/s Feb 9 19:45:32.164444 kernel: raid6: .... xor() 19303 MB/s, rmw enabled Feb 9 19:45:32.164453 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:45:32.175835 kernel: xor: automatically using best checksumming function avx Feb 9 19:45:32.261837 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:45:32.269556 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:45:32.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:32.272823 kernel: audit: type=1130 audit(1707507932.270:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:32.272000 audit: BPF prog-id=7 op=LOAD Feb 9 19:45:32.272000 audit: BPF prog-id=8 op=LOAD Feb 9 19:45:32.273138 systemd[1]: Starting systemd-udevd.service... Feb 9 19:45:32.283821 systemd-udevd[399]: Using default interface naming scheme 'v252'. Feb 9 19:45:32.287463 systemd[1]: Started systemd-udevd.service. Feb 9 19:45:32.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:32.288701 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:45:32.298345 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 9 19:45:32.319562 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:45:32.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:32.321437 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:45:32.354565 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:45:32.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:32.376145 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 19:45:32.378031 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 9 19:45:32.378056 kernel: GPT:9289727 != 19775487 Feb 9 19:45:32.378066 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:45:32.378979 kernel: GPT:9289727 != 19775487 Feb 9 19:45:32.378999 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:45:32.379989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:32.384829 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:45:32.395834 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:45:32.395875 kernel: libata version 3.00 loaded. Feb 9 19:45:32.395886 kernel: AES CTR mode by8 optimization enabled Feb 9 19:45:32.401820 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:45:32.402823 kernel: scsi host0: ata_piix Feb 9 19:45:32.408232 kernel: scsi host1: ata_piix Feb 9 19:45:32.408366 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 19:45:32.408377 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 19:45:32.415819 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Feb 9 19:45:32.417884 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:45:32.423022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:45:32.423342 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:45:32.428872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:45:32.432110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:45:32.433106 systemd[1]: Starting disk-uuid.service... Feb 9 19:45:32.439422 disk-uuid[514]: Primary Header is updated. Feb 9 19:45:32.439422 disk-uuid[514]: Secondary Entries is updated. Feb 9 19:45:32.439422 disk-uuid[514]: Secondary Header is updated. Feb 9 19:45:32.442825 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:32.445820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:32.567858 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 19:45:32.567918 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 19:45:32.597829 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 19:45:32.597995 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:45:32.614827 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:45:33.445802 disk-uuid[515]: The operation has completed successfully. Feb 9 19:45:33.446556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:33.473780 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:45:33.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.473869 systemd[1]: Finished disk-uuid.service. Feb 9 19:45:33.475329 systemd[1]: Starting verity-setup.service... Feb 9 19:45:33.487825 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 19:45:33.504980 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:45:33.506392 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:45:33.508390 systemd[1]: Finished verity-setup.service. 
Feb 9 19:45:33.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.564683 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:45:33.565792 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:45:33.565200 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:45:33.565948 systemd[1]: Starting ignition-setup.service... Feb 9 19:45:33.568043 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:45:33.574079 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:45:33.574131 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:45:33.574141 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:45:33.582288 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:45:33.589386 systemd[1]: Finished ignition-setup.service. Feb 9 19:45:33.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.591474 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:45:33.625455 ignition[618]: Ignition 2.14.0 Feb 9 19:45:33.625871 ignition[618]: Stage: fetch-offline Feb 9 19:45:33.625913 ignition[618]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:33.625920 ignition[618]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:33.626007 ignition[618]: parsed url from cmdline: "" Feb 9 19:45:33.626010 ignition[618]: no config URL provided Feb 9 19:45:33.626014 ignition[618]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:45:33.626020 ignition[618]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:45:33.626037 ignition[618]: op(1): [started] loading QEMU firmware config module Feb 9 19:45:33.626043 ignition[618]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 19:45:33.629254 ignition[618]: op(1): [finished] loading QEMU firmware config module Feb 9 19:45:33.639647 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:45:33.641752 systemd[1]: Starting systemd-networkd.service... Feb 9 19:45:33.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.640000 audit: BPF prog-id=9 op=LOAD Feb 9 19:45:33.644287 ignition[618]: parsing config with SHA512: fde28d8e3affa138d4b94df37c2f025caad3ee07b1a43f6a2ca8ac5acb0e9547ca2d51d37a6d81b88f5f9977a3d6593f77278dd5fcc79487de35586f742fe8db Feb 9 19:45:33.662023 unknown[618]: fetched base config from "system" Feb 9 19:45:33.662745 unknown[618]: fetched user config from "qemu" Feb 9 19:45:33.663421 ignition[618]: fetch-offline: fetch-offline passed Feb 9 19:45:33.663471 ignition[618]: Ignition finished successfully Feb 9 19:45:33.665884 systemd-networkd[707]: lo: Link UP Feb 9 19:45:33.665893 systemd-networkd[707]: lo: Gained carrier Feb 9 19:45:33.666278 systemd-networkd[707]: Enumeration completed Feb 9 19:45:33.666542 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:45:33.667417 systemd-networkd[707]: eth0: Link UP Feb 9 19:45:33.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.667421 systemd-networkd[707]: eth0: Gained carrier Feb 9 19:45:33.668950 systemd[1]: Started systemd-networkd.service. Feb 9 19:45:33.669726 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:45:33.670664 systemd[1]: Reached target network.target. Feb 9 19:45:33.671861 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:45:33.672537 systemd[1]: Starting ignition-kargs.service... Feb 9 19:45:33.674334 systemd[1]: Starting iscsiuio.service... Feb 9 19:45:33.678472 systemd[1]: Started iscsiuio.service. Feb 9 19:45:33.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.679963 systemd[1]: Starting iscsid.service... Feb 9 19:45:33.681874 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:45:33.682401 ignition[710]: Ignition 2.14.0 Feb 9 19:45:33.684210 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:45:33.684210 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:45:33.684210 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:45:33.684210 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:45:33.684210 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:45:33.684210 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:45:33.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.682406 ignition[710]: Stage: kargs Feb 9 19:45:33.684538 systemd[1]: Finished ignition-kargs.service. Feb 9 19:45:33.682497 ignition[710]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:33.686153 systemd[1]: Started iscsid.service. Feb 9 19:45:33.682505 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:33.691331 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:45:33.683449 ignition[710]: kargs: kargs passed Feb 9 19:45:33.692651 systemd[1]: Starting ignition-disks.service...
Feb 9 19:45:33.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.683488 ignition[710]: Ignition finished successfully Feb 9 19:45:33.701352 systemd[1]: Finished ignition-disks.service. Feb 9 19:45:33.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.698666 ignition[721]: Ignition 2.14.0 Feb 9 19:45:33.702170 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:45:33.698673 ignition[721]: Stage: disks Feb 9 19:45:33.703909 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:45:33.698972 ignition[721]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:33.704947 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:45:33.698982 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:33.699894 ignition[721]: disks: disks passed Feb 9 19:45:33.699929 ignition[721]: Ignition finished successfully Feb 9 19:45:33.709115 systemd[1]: Reached target local-fs.target. Feb 9 19:45:33.710158 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:45:33.711288 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:45:33.712458 systemd[1]: Reached target remote-fs.target. Feb 9 19:45:33.713501 systemd[1]: Reached target sysinit.target. Feb 9 19:45:33.714534 systemd[1]: Reached target basic.target. Feb 9 19:45:33.716136 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:45:33.723526 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:45:33.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.725324 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:45:33.734640 systemd-fsck[742]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:45:33.738961 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:45:33.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.741201 systemd[1]: Mounting sysroot.mount... Feb 9 19:45:33.746846 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:45:33.747170 systemd[1]: Mounted sysroot.mount. Feb 9 19:45:33.748152 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:45:33.749822 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:45:33.750992 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:45:33.751023 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:45:33.751041 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:45:33.754720 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:45:33.756156 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:45:33.759748 initrd-setup-root[752]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:45:33.762283 initrd-setup-root[760]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:45:33.765426 initrd-setup-root[768]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:45:33.767573 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:45:33.789745 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:45:33.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.791482 systemd[1]: Starting ignition-mount.service... Feb 9 19:45:33.793356 systemd[1]: Starting sysroot-boot.service... Feb 9 19:45:33.795890 bash[793]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:45:33.803678 ignition[794]: INFO : Ignition 2.14.0 Feb 9 19:45:33.803678 ignition[794]: INFO : Stage: mount Feb 9 19:45:33.805030 ignition[794]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:33.805030 ignition[794]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:33.807688 ignition[794]: INFO : mount: mount passed Feb 9 19:45:33.808327 ignition[794]: INFO : Ignition finished successfully Feb 9 19:45:33.809545 systemd[1]: Finished ignition-mount.service. Feb 9 19:45:33.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:33.811035 systemd[1]: Finished sysroot-boot.service. Feb 9 19:45:33.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:34.515603 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:45:34.521591 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Feb 9 19:45:34.521617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:45:34.521633 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:45:34.522825 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:45:34.525790 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:45:34.527694 systemd[1]: Starting ignition-files.service... 
Feb 9 19:45:34.541370 ignition[823]: INFO : Ignition 2.14.0 Feb 9 19:45:34.541370 ignition[823]: INFO : Stage: files Feb 9 19:45:34.542849 ignition[823]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:34.542849 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:34.542849 ignition[823]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:45:34.545251 ignition[823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:45:34.545251 ignition[823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:45:34.548097 ignition[823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:45:34.549114 ignition[823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:45:34.550395 unknown[823]: wrote ssh authorized keys file for user: core Feb 9 19:45:34.551141 ignition[823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:45:34.552402 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:45:34.552402 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:45:34.552402 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:45:34.552402 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:45:34.781997 systemd-networkd[707]: eth0: Gained IPv6LL Feb 9 19:45:34.926409 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:45:35.038024 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:45:35.038024 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:45:35.041369 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:45:35.041369 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:45:35.348146 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:45:35.444459 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:45:35.446434 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:45:35.446434 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:45:35.446434 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET 
https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:45:35.515512 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:45:35.720591 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:45:35.720591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:45:35.723893 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:45:35.723893 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:45:35.768084 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:45:36.139351 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:45:36.139351 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:45:36.144957 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(b): [started] processing unit "containerd.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(b): [finished] processing unit "containerd.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:45:36.144957 ignition[823]: INFO 
: files: op(f): [started] processing unit "prepare-critools.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:45:36.144957 ignition[823]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:45:36.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.162944 systemd[1]: Finished ignition-files.service. Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:45:36.178450 ignition[823]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:45:36.178450 ignition[823]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:45:36.178450 ignition[823]: INFO : files: files passed Feb 9 19:45:36.178450 ignition[823]: INFO : Ignition finished successfully Feb 9 19:45:36.190000 
audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.165427 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:45:36.166920 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:45:36.202534 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 19:45:36.167626 systemd[1]: Starting ignition-quench.service... Feb 9 19:45:36.205159 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:45:36.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.171317 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:45:36.171380 systemd[1]: Finished ignition-quench.service. Feb 9 19:45:36.173037 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:45:36.174929 systemd[1]: Reached target ignition-complete.target. Feb 9 19:45:36.177425 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:45:36.189690 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:45:36.189757 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:45:36.191011 systemd[1]: Reached target initrd-fs.target. Feb 9 19:45:36.192683 systemd[1]: Reached target initrd.target. Feb 9 19:45:36.194246 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:45:36.194767 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:45:36.204211 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:45:36.205652 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:45:36.214621 systemd[1]: Stopped target network.target. Feb 9 19:45:36.215954 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:45:36.224435 kernel: kauditd_printk_skb: 30 callbacks suppressed Feb 9 19:45:36.224461 kernel: audit: type=1131 audit(1707507936.219:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.217121 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:45:36.239983 kernel: audit: type=1131 audit(1707507936.225:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.240003 kernel: audit: type=1131 audit(1707507936.227:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:36.240016 kernel: audit: type=1131 audit(1707507936.232:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.218321 systemd[1]: Stopped target timers.target. Feb 9 19:45:36.219408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:45:36.219494 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:45:36.220873 systemd[1]: Stopped target initrd.target. Feb 9 19:45:36.224522 systemd[1]: Stopped target basic.target. Feb 9 19:45:36.248071 kernel: audit: type=1131 audit(1707507936.244:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.225109 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:45:36.251372 kernel: audit: type=1131 audit(1707507936.248:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.225550 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:45:36.225833 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:45:36.225986 systemd[1]: Stopped target remote-fs.target. Feb 9 19:45:36.226129 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:45:36.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.226306 systemd[1]: Stopped target sysinit.target. Feb 9 19:45:36.261020 kernel: audit: type=1131 audit(1707507936.255:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:36.261041 ignition[864]: INFO : Ignition 2.14.0 Feb 9 19:45:36.261041 ignition[864]: INFO : Stage: umount Feb 9 19:45:36.261041 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:36.261041 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:36.261041 ignition[864]: INFO : umount: umount passed Feb 9 19:45:36.261041 ignition[864]: INFO : Ignition finished successfully Feb 9 19:45:36.275652 kernel: audit: type=1131 audit(1707507936.260:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.275677 kernel: audit: type=1131 audit(1707507936.263:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.275692 kernel: audit: type=1131 audit(1707507936.265:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.226459 systemd[1]: Stopped target local-fs.target. Feb 9 19:45:36.277000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:45:36.226605 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:45:36.226779 systemd[1]: Stopped target swap.target. Feb 9 19:45:36.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.227034 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:45:36.227173 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:45:36.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:36.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.227578 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:45:36.230043 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:45:36.230163 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:45:36.230553 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:45:36.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.230657 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:45:36.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.233005 systemd[1]: Stopped target paths.target. Feb 9 19:45:36.235299 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:45:36.238856 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:45:36.240055 systemd[1]: Stopped target slices.target. Feb 9 19:45:36.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.240679 systemd[1]: Stopped target sockets.target. Feb 9 19:45:36.241724 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:45:36.241795 systemd[1]: Closed iscsid.socket. Feb 9 19:45:36.242872 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:45:36.242934 systemd[1]: Closed iscsiuio.socket. Feb 9 19:45:36.243880 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:45:36.243964 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:45:36.245047 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:45:36.245143 systemd[1]: Stopped ignition-files.service. Feb 9 19:45:36.248864 systemd[1]: Stopping ignition-mount.service... Feb 9 19:45:36.251947 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:45:36.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.252937 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:45:36.253779 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:45:36.254988 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:45:36.255109 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:45:36.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:36.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.256639 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:45:36.256745 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:45:36.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.262376 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:45:36.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.262449 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:45:36.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.265019 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:45:36.265078 systemd[1]: Stopped ignition-mount.service. Feb 9 19:45:36.265842 systemd-networkd[707]: eth0: DHCPv6 lease lost Feb 9 19:45:36.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:36.311000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:45:36.266725 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:45:36.266801 systemd[1]: Stopped ignition-disks.service. Feb 9 19:45:36.274308 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:45:36.274339 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:45:36.275673 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:45:36.275702 systemd[1]: Stopped ignition-setup.service. Feb 9 19:45:36.277232 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:45:36.277598 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:45:36.277673 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:45:36.279323 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:45:36.279383 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:45:36.280560 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:45:36.280618 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:45:36.282184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:45:36.282216 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:45:36.283276 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:45:36.283306 systemd[1]: Stopped initrd-setup-root.service. 
Feb 9 19:45:36.324000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:45:36.324000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:45:36.284852 systemd[1]: Stopping network-cleanup.service... Feb 9 19:45:36.285529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:45:36.326000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:45:36.326000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:45:36.326000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:45:36.285563 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:45:36.286067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:45:36.286095 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:45:36.286780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:45:36.286819 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:45:36.287939 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:45:36.289259 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:45:36.291738 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:45:36.291819 systemd[1]: Stopped network-cleanup.service. Feb 9 19:45:36.298415 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:45:36.298507 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:45:36.300191 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:45:36.300228 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:45:36.301300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:45:36.301322 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:45:36.302426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:45:36.302455 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:45:36.303676 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:45:36.303704 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:45:36.304779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:45:36.304820 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:45:36.341311 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 9 19:45:36.306507 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:45:36.342110 iscsid[718]: iscsid shutting down. Feb 9 19:45:36.307118 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:45:36.307152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:45:36.308467 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:45:36.308497 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:45:36.309143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:45:36.309172 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:45:36.310487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:45:36.311170 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:45:36.311270 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:45:36.312290 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:45:36.314221 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:45:36.320320 systemd[1]: Switching root. Feb 9 19:45:36.349751 systemd-journald[198]: Journal stopped Feb 9 19:45:39.698110 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:45:39.698180 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:45:39.698192 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:45:39.698204 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:45:39.698214 kernel: SELinux: policy capability open_perms=1 Feb 9 19:45:39.698225 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:45:39.698238 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:45:39.698247 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:45:39.698257 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:45:39.698267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:45:39.698277 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:45:39.698293 systemd[1]: Successfully loaded SELinux policy in 35.288ms. Feb 9 19:45:39.698317 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.311ms. Feb 9 19:45:39.698329 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:45:39.698340 systemd[1]: Detected virtualization kvm. Feb 9 19:45:39.698351 systemd[1]: Detected architecture x86-64. Feb 9 19:45:39.698362 systemd[1]: Detected first boot. Feb 9 19:45:39.698373 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:45:39.698384 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:45:39.698395 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:45:39.698406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:39.698424 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:39.698436 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:39.698446 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:45:39.698456 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:45:39.698467 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:45:39.698478 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:45:39.698489 systemd[1]: Created slice system-getty.slice. Feb 9 19:45:39.698499 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:45:39.698510 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:45:39.698520 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:45:39.698530 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:45:39.698541 systemd[1]: Created slice user.slice. Feb 9 19:45:39.698551 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:45:39.698561 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:45:39.698571 systemd[1]: Set up automount boot.automount. Feb 9 19:45:39.698583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:45:39.698593 systemd[1]: Reached target integritysetup.target. Feb 9 19:45:39.698603 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 19:45:39.698614 systemd[1]: Reached target remote-fs.target. Feb 9 19:45:39.698624 systemd[1]: Reached target slices.target. Feb 9 19:45:39.698634 systemd[1]: Reached target swap.target. Feb 9 19:45:39.698645 systemd[1]: Reached target torcx.target. Feb 9 19:45:39.698655 systemd[1]: Reached target veritysetup.target. Feb 9 19:45:39.698667 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:45:39.698677 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:45:39.698688 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:45:39.698702 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:45:39.698713 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:45:39.698723 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:45:39.698734 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:45:39.698744 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:45:39.698755 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:45:39.698765 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:45:39.698777 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:45:39.698788 systemd[1]: Mounting media.mount... Feb 9 19:45:39.698798 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:45:39.698830 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:45:39.698841 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:45:39.698852 systemd[1]: Mounting tmp.mount... Feb 9 19:45:39.698862 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:45:39.698873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:45:39.698884 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:45:39.698896 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:45:39.698906 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:45:39.698916 systemd[1]: Starting modprobe@drm.service... Feb 9 19:45:39.698927 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:45:39.698937 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:45:39.698949 systemd[1]: Starting modprobe@loop.service... Feb 9 19:45:39.698960 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:45:39.698971 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:45:39.698982 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:45:39.698992 systemd[1]: Starting systemd-journald.service... Feb 9 19:45:39.699003 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:45:39.699014 kernel: loop: module loaded Feb 9 19:45:39.699024 kernel: fuse: init (API version 7.34) Feb 9 19:45:39.699034 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:45:39.699045 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:45:39.699055 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:45:39.699066 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:45:39.699077 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:45:39.699088 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:45:39.699098 systemd[1]: Mounted media.mount. Feb 9 19:45:39.699108 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:45:39.699125 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 9 19:45:39.699136 systemd[1]: Mounted tmp.mount. Feb 9 19:45:39.699146 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:45:39.699157 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:45:39.699167 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:45:39.699183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:45:39.699195 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:45:39.699208 systemd-journald[1007]: Journal started Feb 9 19:45:39.699245 systemd-journald[1007]: Runtime Journal (/run/log/journal/8b6b666203754770a58d6ab845309d81) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:45:39.587000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:45:39.587000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:45:39.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.695000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:45:39.695000 audit[1007]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff0386d060 a2=4000 a3=7fff0386d0fc items=0 ppid=1 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:39.695000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:45:39.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.700854 systemd[1]: Started systemd-journald.service. Feb 9 19:45:39.701651 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:45:39.702166 systemd[1]: Finished modprobe@drm.service. Feb 9 19:45:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:39.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.702951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:45:39.703150 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:45:39.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.703951 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:45:39.704084 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:45:39.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.704838 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:45:39.704970 systemd[1]: Finished modprobe@loop.service. Feb 9 19:45:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.705873 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:45:39.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.706779 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:45:39.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.707747 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:45:39.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.708883 systemd[1]: Reached target network-pre.target. Feb 9 19:45:39.710798 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:45:39.712801 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:45:39.713497 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:45:39.715111 systemd[1]: Starting systemd-hwdb-update.service... 
Feb 9 19:45:39.718701 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:45:39.719338 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:45:39.720251 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:45:39.720883 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:45:39.721824 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:45:39.726528 systemd-journald[1007]: Time spent on flushing to /var/log/journal/8b6b666203754770a58d6ab845309d81 is 14.488ms for 1102 entries. Feb 9 19:45:39.726528 systemd-journald[1007]: System Journal (/var/log/journal/8b6b666203754770a58d6ab845309d81) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:45:39.756859 systemd-journald[1007]: Received client request to flush runtime journal. Feb 9 19:45:39.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.726137 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:45:39.727759 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:45:39.729657 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:45:39.730498 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:45:39.741415 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:45:39.742504 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:45:39.744399 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:45:39.757913 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:45:39.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.765130 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:45:39.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.767391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:45:39.772064 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:45:39.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:39.773896 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:45:39.781401 udevadm[1061]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:45:39.785112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 19:45:39.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.140958 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:45:40.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.142830 systemd[1]: Starting systemd-udevd.service... Feb 9 19:45:40.158614 systemd-udevd[1064]: Using default interface naming scheme 'v252'. Feb 9 19:45:40.171024 systemd[1]: Started systemd-udevd.service. Feb 9 19:45:40.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.174416 systemd[1]: Starting systemd-networkd.service... Feb 9 19:45:40.178580 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:45:40.208351 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:45:40.224426 systemd[1]: Started systemd-userdbd.service. Feb 9 19:45:40.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.238078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:45:40.240836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:45:40.252891 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:45:40.272775 systemd-networkd[1073]: lo: Link UP Feb 9 19:45:40.272786 systemd-networkd[1073]: lo: Gained carrier Feb 9 19:45:40.273276 systemd-networkd[1073]: Enumeration completed Feb 9 19:45:40.273383 systemd-networkd[1073]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:45:40.273387 systemd[1]: Started systemd-networkd.service. Feb 9 19:45:40.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:40.274495 systemd-networkd[1073]: eth0: Link UP Feb 9 19:45:40.274504 systemd-networkd[1073]: eth0: Gained carrier Feb 9 19:45:40.260000 audit[1068]: AVC avc: denied { confidentiality } for pid=1068 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:45:40.260000 audit[1068]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a17c628020 a1=32194 a2=7f5164eabbc5 a3=5 items=108 ppid=1064 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.260000 audit: CWD cwd="/" Feb 9 19:45:40.260000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=1 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=2 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=3 name=(null) inode=670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=4 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=5 name=(null) inode=671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=6 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=7 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=8 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=9 name=(null) inode=673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=10 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=11 name=(null) inode=674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=12 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:45:40.260000 audit: PATH item=13 name=(null) inode=675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=14 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=15 name=(null) inode=676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=16 name=(null) inode=672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=17 name=(null) inode=677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=18 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=19 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=20 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=21 name=(null) inode=679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=22 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=23 name=(null) inode=680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=24 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=25 name=(null) inode=681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=26 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=27 name=(null) inode=682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=28 name=(null) inode=678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=29 name=(null) inode=683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=30 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=31 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=32 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=33 name=(null) inode=685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=34 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=35 name=(null) inode=686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=36 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=37 name=(null) inode=687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=38 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=39 name=(null) inode=688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=40 name=(null) inode=684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=41 name=(null) inode=689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=42 name=(null) inode=669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=43 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=44 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=45 name=(null) inode=691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 
audit: PATH item=46 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=47 name=(null) inode=692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=48 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=49 name=(null) inode=693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=50 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=51 name=(null) inode=694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=52 name=(null) inode=690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=53 name=(null) inode=695 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=55 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=56 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=57 name=(null) inode=697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=58 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=59 name=(null) inode=698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=60 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=61 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=62 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=63 name=(null) inode=700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=64 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=65 name=(null) inode=701 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=66 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=67 name=(null) inode=702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=68 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=69 name=(null) inode=703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=70 name=(null) inode=699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=71 name=(null) inode=704 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=72 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=73 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=74 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=75 name=(null) inode=706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=76 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=77 name=(null) inode=707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=78 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=79 name=(null) 
inode=708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=80 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=81 name=(null) inode=709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=82 name=(null) inode=705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=83 name=(null) inode=710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=84 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=85 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=86 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=87 name=(null) inode=712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=88 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=89 name=(null) inode=713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=90 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=91 name=(null) inode=714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=92 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=93 name=(null) inode=715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=94 name=(null) inode=711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=95 name=(null) inode=716 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=96 name=(null) inode=696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=97 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=98 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=99 name=(null) inode=718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=100 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=101 name=(null) inode=719 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=102 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=103 name=(null) inode=720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=104 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=105 name=(null) inode=721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=106 name=(null) inode=717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PATH item=107 name=(null) inode=722 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:40.260000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:45:40.285088 systemd-networkd[1073]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:45:40.285822 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 19:45:40.292825 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:45:40.295824 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:45:40.338151 kernel: kvm: Nested Virtualization enabled Feb 9 19:45:40.338249 kernel: SVM: kvm: Nested Paging enabled Feb 9 19:45:40.338289 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 19:45:40.338974 kernel: SVM: Virtual GIF supported Feb 9 19:45:40.351837 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:45:40.376244 systemd[1]: Finished systemd-udev-settle.service. 
Feb 9 19:45:40.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.378183 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:45:40.384701 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:45:40.406452 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:45:40.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.407231 systemd[1]: Reached target cryptsetup.target. Feb 9 19:45:40.408913 systemd[1]: Starting lvm2-activation.service... Feb 9 19:45:40.412073 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:45:40.436744 systemd[1]: Finished lvm2-activation.service. Feb 9 19:45:40.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.437509 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:45:40.438123 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:45:40.438143 systemd[1]: Reached target local-fs.target. Feb 9 19:45:40.438686 systemd[1]: Reached target machines.target. Feb 9 19:45:40.440437 systemd[1]: Starting ldconfig.service... Feb 9 19:45:40.441190 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:45:40.441233 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:45:40.442123 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:45:40.443519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:45:40.445160 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:45:40.445882 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:45:40.445919 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:45:40.446692 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:45:40.447561 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Feb 9 19:45:40.448533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:45:40.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.451547 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:45:40.454680 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:45:40.455518 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:45:40.458261 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 9 19:45:40.481408 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Feb 9 19:45:40.481408 systemd-fsck[1115]: /dev/vda1: 790 files, 115362/258078 clusters Feb 9 19:45:40.482476 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:45:40.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.484835 systemd[1]: Mounting boot.mount... Feb 9 19:45:40.504132 systemd[1]: Mounted boot.mount. Feb 9 19:45:40.923513 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:45:40.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.948400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:45:40.949075 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:45:40.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.964042 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:45:40.969499 systemd[1]: Finished ldconfig.service. Feb 9 19:45:40.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.993070 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:45:40.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:40.995645 systemd[1]: Starting audit-rules.service... Feb 9 19:45:40.997505 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:45:40.999282 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:45:41.002074 systemd[1]: Starting systemd-resolved.service... Feb 9 19:45:41.006722 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:45:41.008724 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:45:41.010300 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:45:41.011580 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:45:41.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.013000 audit[1136]: SYSTEM_BOOT pid=1136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:41.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.017720 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:45:41.019056 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:45:41.021677 systemd[1]: Starting systemd-update-done.service... Feb 9 19:45:41.027426 systemd[1]: Finished systemd-update-done.service. Feb 9 19:45:41.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.037000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:45:41.037000 audit[1149]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1ff263a0 a2=420 a3=0 items=0 ppid=1124 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.037000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:45:41.038761 augenrules[1149]: No rules Feb 9 19:45:41.039560 systemd[1]: Finished audit-rules.service. Feb 9 19:45:41.080443 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:45:41.081487 systemd[1]: Reached target time-set.target. Feb 9 19:45:40.185833 systemd-timesyncd[1135]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 19:45:40.208045 systemd-journald[1007]: Time jumped backwards, rotating. Feb 9 19:45:40.185868 systemd-timesyncd[1135]: Initial clock synchronization to Fri 2024-02-09 19:45:40.185772 UTC. Feb 9 19:45:40.192237 systemd-resolved[1131]: Positive Trust Anchors: Feb 9 19:45:40.192252 systemd-resolved[1131]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:45:40.192288 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:45:40.199430 systemd-resolved[1131]: Defaulting to hostname 'linux'. Feb 9 19:45:40.201019 systemd[1]: Started systemd-resolved.service. Feb 9 19:45:40.201884 systemd[1]: Reached target network.target. Feb 9 19:45:40.202571 systemd[1]: Reached target nss-lookup.target. Feb 9 19:45:40.203249 systemd[1]: Reached target sysinit.target. Feb 9 19:45:40.203966 systemd[1]: Started motdgen.path. Feb 9 19:45:40.204559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:45:40.205548 systemd[1]: Started logrotate.timer. Feb 9 19:45:40.206203 systemd[1]: Started mdadm.timer. Feb 9 19:45:40.206826 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:45:40.207633 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:45:40.207649 systemd[1]: Reached target paths.target. 
Feb 9 19:45:40.208277 systemd[1]: Reached target timers.target. Feb 9 19:45:40.209164 systemd[1]: Listening on dbus.socket. Feb 9 19:45:40.210785 systemd[1]: Starting docker.socket... Feb 9 19:45:40.212157 systemd[1]: Listening on sshd.socket. Feb 9 19:45:40.212858 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:45:40.213100 systemd[1]: Listening on docker.socket. Feb 9 19:45:40.213754 systemd[1]: Reached target sockets.target. Feb 9 19:45:40.214457 systemd[1]: Reached target basic.target. Feb 9 19:45:40.215172 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:45:40.215209 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:45:40.215226 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:45:40.216118 systemd[1]: Starting containerd.service... Feb 9 19:45:40.217622 systemd[1]: Starting dbus.service... Feb 9 19:45:40.219041 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:45:40.221138 systemd[1]: Starting extend-filesystems.service... Feb 9 19:45:40.221997 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:45:40.222978 systemd[1]: Starting motdgen.service... Feb 9 19:45:40.224140 jq[1162]: false Feb 9 19:45:40.226408 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:45:40.228519 systemd[1]: Starting prepare-critools.service... Feb 9 19:45:40.230602 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:45:40.231204 dbus-daemon[1161]: [system] SELinux support is enabled Feb 9 19:45:40.232593 systemd[1]: Starting sshd-keygen.service... Feb 9 19:45:40.236266 systemd[1]: Starting systemd-logind.service... Feb 9 19:45:40.237134 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:45:40.270313 extend-filesystems[1163]: Found sr0 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda1 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda2 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda3 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found usr Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda4 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda6 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda7 Feb 9 19:45:40.270313 extend-filesystems[1163]: Found vda9 Feb 9 19:45:40.270313 extend-filesystems[1163]: Checking size of /dev/vda9 Feb 9 19:45:40.237193 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:45:40.238627 systemd[1]: Starting update-engine.service... Feb 9 19:45:40.284178 jq[1183]: true Feb 9 19:45:40.242458 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:45:40.243931 systemd[1]: Started dbus.service. Feb 9 19:45:40.284434 tar[1190]: ./ Feb 9 19:45:40.284434 tar[1190]: ./macvlan Feb 9 19:45:40.248049 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 9 19:45:40.284717 tar[1191]: crictl Feb 9 19:45:40.284920 extend-filesystems[1163]: Resized partition /dev/vda9 Feb 9 19:45:40.319935 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 19:45:40.248313 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:45:40.320039 jq[1192]: true Feb 9 19:45:40.320564 update_engine[1180]: I0209 19:45:40.297318 1180 main.cc:92] Flatcar Update Engine starting Feb 9 19:45:40.320564 update_engine[1180]: I0209 19:45:40.299180 1180 update_check_scheduler.cc:74] Next update check in 11m46s Feb 9 19:45:40.320761 extend-filesystems[1223]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:45:40.248794 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:45:40.321849 env[1193]: time="2024-02-09T19:45:40.289815578Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:45:40.249079 systemd[1]: Finished motdgen.service. Feb 9 19:45:40.254661 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:45:40.322155 bash[1220]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:45:40.254708 systemd[1]: Reached target system-config.target. Feb 9 19:45:40.255575 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:45:40.255589 systemd[1]: Reached target user-config.target. Feb 9 19:45:40.259751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:45:40.260066 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:45:40.284777 systemd-logind[1179]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:45:40.284791 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:45:40.284949 systemd-logind[1179]: New seat seat0. Feb 9 19:45:40.314704 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:45:40.325929 systemd[1]: Started update-engine.service. Feb 9 19:45:40.333713 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 19:45:40.334198 systemd[1]: Started systemd-logind.service. Feb 9 19:45:40.351279 env[1193]: time="2024-02-09T19:45:40.337496470Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:45:40.351325 extend-filesystems[1223]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:45:40.351325 extend-filesystems[1223]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:45:40.351325 extend-filesystems[1223]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 19:45:40.369698 tar[1190]: ./static Feb 9 19:45:40.352383 systemd[1]: Started locksmithd.service. Feb 9 19:45:40.369781 extend-filesystems[1163]: Resized filesystem in /dev/vda9 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.351710762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354052683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354124247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354489181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354513216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354531080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354545497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354635496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.354902737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:45:40.370415 env[1193]: time="2024-02-09T19:45:40.355093064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:45:40.362967 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.355113813Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.355170729Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.355187310Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359854832Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359885379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359906649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359944690Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359973805Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.359993823Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.360010804Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.360029199Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.360047513Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.360065667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.370805 env[1193]: time="2024-02-09T19:45:40.360082359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.363169 systemd[1]: Finished extend-filesystems.service. Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360099300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360196763Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360289918Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360736184Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360767774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360785797Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360843145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360861409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360878862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360894271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360913627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360930228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360948262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371092 env[1193]: time="2024-02-09T19:45:40.360976395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.364000 systemd[1]: Started containerd.service. 
Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.360994228Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361130424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361150702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361167122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361184645Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361203090Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361217056Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361239989Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:45:40.371362 env[1193]: time="2024-02-09T19:45:40.361280675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.361537427Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} 
ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.361608490Z" level=info msg="Connect containerd service" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.361648345Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362216099Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362462481Z" level=info msg="Start subscribing containerd event" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362513968Z" level=info msg="Start recovering state" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362571696Z" level=info msg="Start event monitor" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362586023Z" level=info msg="Start snapshots syncer" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362598326Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362606852Z" level=info msg="Start streaming server" Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.362867571Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.363024786Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:45:40.371522 env[1193]: time="2024-02-09T19:45:40.363089818Z" level=info msg="containerd successfully booted in 0.073870s" Feb 9 19:45:40.380702 tar[1190]: ./vlan Feb 9 19:45:40.407789 tar[1190]: ./portmap Feb 9 19:45:40.421602 locksmithd[1233]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:45:40.435900 tar[1190]: ./host-local Feb 9 19:45:40.460629 tar[1190]: ./vrf Feb 9 19:45:40.487706 tar[1190]: ./bridge Feb 9 19:45:40.519010 tar[1190]: ./tuning Feb 9 19:45:40.544617 tar[1190]: ./firewall Feb 9 19:45:40.577557 tar[1190]: ./host-device Feb 9 19:45:40.606357 tar[1190]: ./sbr Feb 9 19:45:40.632382 tar[1190]: ./loopback Feb 9 19:45:40.657460 tar[1190]: ./dhcp Feb 9 19:45:40.683758 systemd[1]: Finished prepare-critools.service. Feb 9 19:45:40.725325 tar[1190]: ./ptp Feb 9 19:45:40.753177 tar[1190]: ./ipvlan Feb 9 19:45:40.780010 tar[1190]: ./bandwidth Feb 9 19:45:40.812996 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:45:40.922499 sshd_keygen[1185]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:45:40.940788 systemd[1]: Finished sshd-keygen.service. Feb 9 19:45:40.943357 systemd[1]: Starting issuegen.service... Feb 9 19:45:40.948117 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:45:40.948299 systemd[1]: Finished issuegen.service. Feb 9 19:45:40.950092 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:45:40.954640 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:45:40.956818 systemd[1]: Started getty@tty1.service. Feb 9 19:45:40.958479 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:45:40.959281 systemd[1]: Reached target getty.target. Feb 9 19:45:40.960036 systemd[1]: Reached target multi-user.target. 
Feb 9 19:45:40.961551 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:45:40.967872 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:45:40.968058 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:45:40.968840 systemd[1]: Startup finished in 5.247s (kernel) + 5.484s (userspace) = 10.732s. Feb 9 19:45:41.137226 systemd[1]: Created slice system-sshd.slice. Feb 9 19:45:41.138418 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:42258.service. Feb 9 19:45:41.174351 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.175600 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.184112 systemd-logind[1179]: New session 1 of user core. Feb 9 19:45:41.185097 systemd[1]: Created slice user-500.slice. Feb 9 19:45:41.186073 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:45:41.193427 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:45:41.194360 systemd[1]: Starting user@500.service... Feb 9 19:45:41.196877 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.245799 systemd-networkd[1073]: eth0: Gained IPv6LL Feb 9 19:45:41.260502 systemd[1271]: Queued start job for default target default.target. Feb 9 19:45:41.260728 systemd[1271]: Reached target paths.target. Feb 9 19:45:41.260742 systemd[1271]: Reached target sockets.target. Feb 9 19:45:41.260753 systemd[1271]: Reached target timers.target. Feb 9 19:45:41.260764 systemd[1271]: Reached target basic.target. Feb 9 19:45:41.260799 systemd[1271]: Reached target default.target. Feb 9 19:45:41.260819 systemd[1271]: Startup finished in 58ms. Feb 9 19:45:41.260878 systemd[1]: Started user@500.service. Feb 9 19:45:41.261700 systemd[1]: Started session-1.scope. Feb 9 19:45:41.311905 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:42274.service. Feb 9 19:45:41.340812 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 42274 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.341833 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.345051 systemd-logind[1179]: New session 2 of user core. Feb 9 19:45:41.345732 systemd[1]: Started session-2.scope. Feb 9 19:45:41.397982 sshd[1280]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:41.399904 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:42282.service. Feb 9 19:45:41.400267 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:42274.service: Deactivated successfully. Feb 9 19:45:41.401045 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:45:41.401069 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:45:41.401745 systemd-logind[1179]: Removed session 2. Feb 9 19:45:41.428706 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.429621 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.432142 systemd-logind[1179]: New session 3 of user core. Feb 9 19:45:41.432722 systemd[1]: Started session-3.scope. Feb 9 19:45:41.480639 sshd[1285]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:41.482583 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:42292.service. 
Feb 9 19:45:41.482924 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:42282.service: Deactivated successfully. Feb 9 19:45:41.483620 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:45:41.483649 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:45:41.484385 systemd-logind[1179]: Removed session 3. Feb 9 19:45:41.510287 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 42292 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.511037 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.513719 systemd-logind[1179]: New session 4 of user core. Feb 9 19:45:41.514509 systemd[1]: Started session-4.scope. Feb 9 19:45:41.566638 sshd[1293]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:41.568735 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:42296.service. Feb 9 19:45:41.569089 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:42292.service: Deactivated successfully. Feb 9 19:45:41.569854 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:45:41.569873 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:45:41.571008 systemd-logind[1179]: Removed session 4. Feb 9 19:45:41.597865 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 42296 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.598852 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.601616 systemd-logind[1179]: New session 5 of user core. Feb 9 19:45:41.602323 systemd[1]: Started session-5.scope. Feb 9 19:45:41.655924 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:45:41.656086 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:45:41.667873 dbus-daemon[1161]: \xd0=\xa8\xa6\xedU: received setenforce notice (enforcing=1551989216) Feb 9 19:45:41.669744 sudo[1305]: pam_unix(sudo:session): session closed for user root Feb 9 19:45:41.671254 sshd[1299]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:41.673860 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:42298.service. Feb 9 19:45:41.674371 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:42296.service: Deactivated successfully. Feb 9 19:45:41.675279 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:45:41.675346 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:45:41.676202 systemd-logind[1179]: Removed session 5. Feb 9 19:45:41.702889 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 42298 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.703872 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.706702 systemd-logind[1179]: New session 6 of user core. Feb 9 19:45:41.707366 systemd[1]: Started session-6.scope. 
Feb 9 19:45:41.757952 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:45:41.758108 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:45:41.760258 sudo[1314]: pam_unix(sudo:session): session closed for user root Feb 9 19:45:41.764143 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:45:41.764309 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:45:41.771528 systemd[1]: Stopping audit-rules.service... Feb 9 19:45:41.770000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:45:41.772396 auditctl[1317]: No rules Feb 9 19:45:41.772622 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:45:41.772792 systemd[1]: Stopped audit-rules.service. Feb 9 19:45:41.776012 kernel: kauditd_printk_skb: 201 callbacks suppressed Feb 9 19:45:41.776049 kernel: audit: type=1305 audit(1707507941.770:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:45:41.776064 kernel: audit: type=1300 audit(1707507941.770:130): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde41a9390 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.770000 audit[1317]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde41a9390 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.773882 systemd[1]: Starting audit-rules.service... Feb 9 19:45:41.777112 kernel: audit: type=1327 audit(1707507941.770:130): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:45:41.770000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:45:41.777979 kernel: audit: type=1131 audit(1707507941.771:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.788778 augenrules[1335]: No rules Feb 9 19:45:41.789294 systemd[1]: Finished audit-rules.service. Feb 9 19:45:41.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.790225 sudo[1313]: pam_unix(sudo:session): session closed for user root Feb 9 19:45:41.788000 audit[1313]: USER_END pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:41.792360 sshd[1308]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:41.793182 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:42304.service. Feb 9 19:45:41.794215 kernel: audit: type=1130 audit(1707507941.787:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.794249 kernel: audit: type=1106 audit(1707507941.788:133): pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.794263 kernel: audit: type=1104 audit(1707507941.788:134): pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.788000 audit[1313]: CRED_DISP pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.795587 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:42298.service: Deactivated successfully. Feb 9 19:45:41.796116 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:45:41.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:42304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.796661 systemd-logind[1179]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:45:41.797195 systemd-logind[1179]: Removed session 6. Feb 9 19:45:41.798596 kernel: audit: type=1130 audit(1707507941.791:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:42304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:41.798629 kernel: audit: type=1106 audit(1707507941.792:136): pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.792000 audit[1308]: USER_END pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.792000 audit[1308]: CRED_DISP pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.803706 kernel: audit: type=1104 audit(1707507941.792:137): pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.71:22-10.0.0.1:42298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.822000 audit[1340]: USER_ACCT pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.824668 sshd[1340]: Accepted publickey for core from 10.0.0.1 port 42304 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:41.823000 audit[1340]: CRED_ACQ pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.823000 audit[1340]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb16af310 a2=3 a3=0 items=0 ppid=1 pid=1340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.823000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:45:41.825617 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:41.828612 systemd-logind[1179]: New session 7 of user core. Feb 9 19:45:41.829451 systemd[1]: Started session-7.scope. 
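The audit-rules restart earlier in this stretch ends with both auditctl and augenrules reporting "No rules", which is expected once 80-selinux.rules and 99-default.rules have been removed from /etc/audit/rules.d/; a minimal sketch of verifying that state by hand (the commands are assumed to be present on the host):
# the directory augenrules merges into a single ruleset
ls /etc/audit/rules.d/
# ask augenrules whether the merged rules on disk still match what is loaded
augenrules --check
# kernel-side view; prints "No rules" when the ruleset is empty
auditctl -l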
Feb 9 19:45:41.830000 audit[1340]: USER_START pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.831000 audit[1345]: CRED_ACQ pid=1345 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:41.877000 audit[1346]: USER_ACCT pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.879537 sudo[1346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:45:41.877000 audit[1346]: CRED_REFR pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:41.879718 sudo[1346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:45:41.879000 audit[1346]: USER_START pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.387964 systemd[1]: Reloading. Feb 9 19:45:42.443946 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2024-02-09T19:45:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:42.443971 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2024-02-09T19:45:42Z" level=info msg="torcx already run" Feb 9 19:45:42.507785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:42.507798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:42.526097 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:42.586513 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:45:42.590988 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:45:42.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.591388 systemd[1]: Reached target network-online.target. Feb 9 19:45:42.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.592461 systemd[1]: Started kubelet.service. 
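The reload above flags three legacy unit settings (CPUShares=, MemoryLimit=, and a ListenStream= below /var/run/); a hedged sketch of drop-ins that would address them, with placeholder values since the original directives are not shown in the log:
# locksmithd: cgroup-v2 equivalents of the deprecated directives (weight/limit values are placeholders)
mkdir -p /etc/systemd/system/locksmithd.service.d
cat >/etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf <<'EOF'
[Service]
CPUWeight=100
MemoryMax=128M
EOF
# docker.socket: clear the inherited listener, then point it below /run instead of /var/run
mkdir -p /etc/systemd/system/docker.socket.d
cat >/etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload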
Feb 9 19:45:42.602268 systemd[1]: Starting coreos-metadata.service... Feb 9 19:45:42.609389 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 19:45:42.609671 systemd[1]: Finished coreos-metadata.service. Feb 9 19:45:42.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.637021 kubelet[1424]: E0209 19:45:42.636962 1424 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:45:42.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:45:42.638897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:45:42.639050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:45:42.773338 systemd[1]: Stopped kubelet.service. Feb 9 19:45:42.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:42.786425 systemd[1]: Reloading. Feb 9 19:45:42.834493 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-02-09T19:45:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:42.834520 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-02-09T19:45:42Z" level=info msg="torcx already run" Feb 9 19:45:42.903732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:42.903747 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:42.922170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:42.987300 systemd[1]: Started kubelet.service. Feb 9 19:45:42.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:43.021597 kubelet[1543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
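The first kubelet start above exits because no CRI endpoint was configured, while the restart logged further below reports containerd 1.6.16 as its runtime, so the flag was evidently supplied on the second attempt. A minimal sketch of passing it through a drop-in (the socket path, drop-in name, and reliance on $KUBELET_EXTRA_ARGS in ExecStart are assumptions, not taken from this host):
mkdir -p /etc/systemd/system/kubelet.service.d
cat >/etc/systemd/system/kubelet.service.d/20-cri.conf <<'EOF'
[Service]
# only effective if the unit's ExecStart already expands $KUBELET_EXTRA_ARGS
Environment=KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock
EOF
systemctl daemon-reload && systemctl restart kubelet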
Feb 9 19:45:43.021597 kubelet[1543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:43.021955 kubelet[1543]: I0209 19:45:43.021616 1543 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:45:43.022903 kubelet[1543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:43.022903 kubelet[1543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:43.130530 kubelet[1543]: I0209 19:45:43.130497 1543 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:45:43.130530 kubelet[1543]: I0209 19:45:43.130526 1543 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:45:43.130779 kubelet[1543]: I0209 19:45:43.130763 1543 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:45:43.132359 kubelet[1543]: I0209 19:45:43.132321 1543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:45:43.136629 kubelet[1543]: I0209 19:45:43.136614 1543 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:45:43.136963 kubelet[1543]: I0209 19:45:43.136945 1543 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:45:43.137017 kubelet[1543]: I0209 19:45:43.137005 1543 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:45:43.137106 kubelet[1543]: I0209 19:45:43.137023 1543 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:45:43.137106 kubelet[1543]: I0209 19:45:43.137032 1543 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 
19:45:43.137106 kubelet[1543]: I0209 19:45:43.137106 1543 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:43.141195 kubelet[1543]: I0209 19:45:43.141180 1543 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:45:43.141195 kubelet[1543]: I0209 19:45:43.141197 1543 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:45:43.141279 kubelet[1543]: I0209 19:45:43.141216 1543 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:45:43.141279 kubelet[1543]: I0209 19:45:43.141228 1543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:45:43.141326 kubelet[1543]: E0209 19:45:43.141292 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:43.141326 kubelet[1543]: E0209 19:45:43.141317 1543 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:43.141749 kubelet[1543]: I0209 19:45:43.141727 1543 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:45:43.141972 kubelet[1543]: W0209 19:45:43.141956 1543 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:45:43.142285 kubelet[1543]: I0209 19:45:43.142264 1543 server.go:1186] "Started kubelet" Feb 9 19:45:43.142326 kubelet[1543]: I0209 19:45:43.142307 1543 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:45:43.143160 kubelet[1543]: I0209 19:45:43.143140 1543 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:45:43.141000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:43.141000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:43.141000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000050c90 a1=c00005b128 a2=c000050c60 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.141000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:43.141000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:43.141000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:43.141000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00014dd60 a1=c00005b140 a2=c000050d20 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.141000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:43.143621 kubelet[1543]: E0209 19:45:43.143220 1543 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:45:43.143621 kubelet[1543]: E0209 19:45:43.143239 1543 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:45:43.143621 kubelet[1543]: I0209 19:45:43.143293 1543 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:45:43.143621 kubelet[1543]: I0209 19:45:43.143322 1543 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:45:43.143621 kubelet[1543]: I0209 19:45:43.143368 1543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:45:43.143852 kubelet[1543]: I0209 19:45:43.143830 1543 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:45:43.144052 kubelet[1543]: I0209 19:45:43.143902 1543 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:45:43.155497 kubelet[1543]: W0209 19:45:43.155417 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:43.155497 kubelet[1543]: E0209 19:45:43.155445 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:43.155497 kubelet[1543]: W0209 19:45:43.155469 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:43.155497 kubelet[1543]: E0209 19:45:43.155478 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:43.155641 kubelet[1543]: E0209 19:45:43.155507 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d7226cd76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 142247798, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 142247798, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:43.155755 kubelet[1543]: E0209 19:45:43.155700 1543 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:43.155755 kubelet[1543]: W0209 19:45:43.155732 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:43.155755 kubelet[1543]: E0209 19:45:43.155743 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:43.157092 kubelet[1543]: E0209 19:45:43.157048 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d7235d0cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 143231693, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 143231693, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:43.163000 audit[1556]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.163000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec261cf40 a2=0 a3=7ffec261cf2c items=0 ppid=1543 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:45:43.164000 audit[1560]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.164000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd21f3a550 a2=0 a3=7ffd21f3a53c items=0 ppid=1543 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:45:43.174521 kubelet[1543]: I0209 19:45:43.174495 1543 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:45:43.174521 kubelet[1543]: I0209 19:45:43.174515 1543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:45:43.174607 kubelet[1543]: I0209 19:45:43.174532 1543 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:43.174997 kubelet[1543]: E0209 19:45:43.174915 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:43.175662 kubelet[1543]: E0209 19:45:43.175617 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:43.176435 kubelet[1543]: E0209 19:45:43.176382 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
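The repeated 'User "system:anonymous" cannot ...' rejections in this stretch are what the API server returns while the kubelet's client-certificate bootstrap is still pending; a quick way to reproduce the same RBAC answers from a workstation with admin credentials (kubectl and cluster access are assumed, they are not part of this host's log):
# anonymous requests are expected to be denied
kubectl auth can-i create nodes --as=system:anonymous
kubectl auth can-i create events --as=system:anonymous -n default
# once the bootstrap CSR is approved, the node object should register
kubectl get csr
kubectl get node 10.0.0.71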
Feb 9 19:45:43.165000 audit[1562]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.165000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc32259830 a2=0 a3=7ffc3225981c items=0 ppid=1543 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:45:43.180000 audit[1568]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.180000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc911b5610 a2=0 a3=7ffc911b55fc items=0 ppid=1543 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:45:43.222000 audit[1573]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.222000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeae8a8110 a2=0 a3=7ffeae8a80fc items=0 ppid=1543 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:45:43.223000 audit[1574]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.223000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffda6776b10 a2=0 a3=7ffda6776afc items=0 ppid=1543 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:45:43.226000 audit[1577]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.226000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcffd1e4e0 a2=0 a3=7ffcffd1e4cc items=0 ppid=1543 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.226000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:45:43.229000 audit[1580]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.229000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc23034e70 a2=0 a3=7ffc23034e5c items=0 ppid=1543 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:45:43.230000 audit[1581]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.230000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6db365c0 a2=0 a3=7ffd6db365ac items=0 ppid=1543 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:45:43.231000 audit[1582]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.231000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff487e0bd0 a2=0 a3=7fff487e0bbc items=0 ppid=1543 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:45:43.232000 audit[1584]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.232000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffddf905020 a2=0 a3=7ffddf90500c items=0 ppid=1543 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:45:43.245239 kubelet[1543]: I0209 19:45:43.245216 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:43.246000 kubelet[1543]: E0209 19:45:43.245976 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:43.246329 kubelet[1543]: E0209 19:45:43.246254 1543 event.go:267] Server rejected event 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 245179465, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:43.247047 kubelet[1543]: E0209 19:45:43.246984 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 245189504, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
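The NETFILTER_CFG audit records surrounding these events carry the iptables invocation only as a hex-encoded PROCTITLE; a one-liner for decoding one (the hex string below is a shortened example, and xxd/tr are assumed to be available):
# argv entries are NUL-separated in the proctitle, so map the NULs back to spaces
echo 69707461626C6573002D770035 | xxd -r -p | tr '\0' ' '; echo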
Feb 9 19:45:43.247702 kubelet[1543]: E0209 19:45:43.247653 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 245192038, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:43.251410 kubelet[1543]: I0209 19:45:43.251386 1543 policy_none.go:49] "None policy: Start" Feb 9 19:45:43.252035 kubelet[1543]: I0209 19:45:43.252022 1543 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:45:43.252118 kubelet[1543]: I0209 19:45:43.252098 1543 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:45:43.234000 audit[1586]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.234000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc005eb660 a2=0 a3=7ffc005eb64c items=0 ppid=1543 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:45:43.252000 audit[1589]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.252000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffeb209bc20 a2=0 a3=7ffeb209bc0c items=0 ppid=1543 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:45:43.254000 audit[1591]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:45:43.254000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff16604580 a2=0 a3=7fff1660456c items=0 ppid=1543 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:45:43.259000 audit[1594]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.259000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffd6d813830 a2=0 a3=7ffd6d81381c items=0 ppid=1543 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:45:43.261635 kubelet[1543]: I0209 19:45:43.261616 1543 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:45:43.260000 audit[1595]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.260000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9dcd1c90 a2=0 a3=7ffe9dcd1c7c items=0 ppid=1543 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:45:43.260000 audit[1596]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.260000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc0069c00 a2=0 a3=7ffcc0069bec items=0 ppid=1543 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:45:43.261000 audit[1597]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.261000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde72a9980 a2=0 a3=7ffde72a996c items=0 ppid=1543 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.261000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 
19:45:43.261000 audit[1598]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.261000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb1d05170 a2=0 a3=7ffeb1d0515c items=0 ppid=1543 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:45:43.262000 audit[1600]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:43.262000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcf79cd80 a2=0 a3=7ffdcf79cd6c items=0 ppid=1543 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:45:43.262000 audit[1601]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.262000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff0da57ec0 a2=0 a3=7fff0da57eac items=0 ppid=1543 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:45:43.263000 audit[1602]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.263000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe7e0da710 a2=0 a3=7ffe7e0da6fc items=0 ppid=1543 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:45:43.265000 audit[1604]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.265000 audit[1604]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc95e5e0b0 a2=0 a3=7ffc95e5e09c items=0 ppid=1543 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.265000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:45:43.266000 audit[1605]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.266000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5d25c740 a2=0 a3=7ffd5d25c72c items=0 ppid=1543 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:45:43.266000 audit[1606]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.266000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea7a20660 a2=0 a3=7ffea7a2064c items=0 ppid=1543 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:45:43.268000 audit[1608]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.268000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe5bfb34b0 a2=0 a3=7ffe5bfb349c items=0 ppid=1543 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:45:43.269000 audit[1610]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.269000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff13a02b40 a2=0 a3=7fff13a02b2c items=0 ppid=1543 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:45:43.271000 audit[1612]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.271000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc68b8a6b0 a2=0 a3=7ffc68b8a69c items=0 ppid=1543 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:45:43.272000 audit[1614]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.272000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc67b60fe0 a2=0 a3=7ffc67b60fcc items=0 ppid=1543 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:45:43.274000 audit[1616]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.274000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdc0d3b310 a2=0 a3=7ffdc0d3b2fc items=0 ppid=1543 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:45:43.276647 kubelet[1543]: I0209 19:45:43.276630 1543 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:45:43.276647 kubelet[1543]: I0209 19:45:43.276646 1543 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:45:43.276708 kubelet[1543]: I0209 19:45:43.276661 1543 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:45:43.276731 kubelet[1543]: E0209 19:45:43.276720 1543 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:45:43.275000 audit[1617]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.275000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee4c4c570 a2=0 a3=7ffee4c4c55c items=0 ppid=1543 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:45:43.277533 kubelet[1543]: W0209 19:45:43.277518 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:43.277577 kubelet[1543]: E0209 19:45:43.277542 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:43.276000 audit[1618]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.276000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6f20d4c0 a2=0 a3=7ffc6f20d4ac items=0 ppid=1543 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:45:43.277000 audit[1619]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:43.277000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbd848540 a2=0 a3=7ffdbd84852c items=0 ppid=1543 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:45:43.288028 kubelet[1543]: I0209 19:45:43.288007 1543 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:45:43.288183 kubelet[1543]: I0209 19:45:43.288168 1543 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:45:43.286000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:43.286000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:43.286000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00130c900 a1=c000a7b830 a2=c00130c8d0 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:43.286000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:43.288544 kubelet[1543]: I0209 19:45:43.288532 1543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:45:43.289167 kubelet[1543]: E0209 19:45:43.289144 1543 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.71\" not found" Feb 9 19:45:43.289905 kubelet[1543]: E0209 19:45:43.289823 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d7ae21973", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 288740211, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 288740211, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
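The audit PROCTITLE fields in the entries above carry the invoked command's argv, hex-encoded with NUL separators. A minimal decoding sketch (Python), assuming nothing beyond the first KUBE-KUBELET-CANARY entry above, whose hex value is copied verbatim:

    # Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv) into a command line.
    sample = (
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174"
    )

    def decode_proctitle(hex_argv: str) -> str:
        """Join the NUL-separated argv back into a readable command line."""
        return " ".join(bytes.fromhex(hex_argv).decode("ascii").split("\x00"))

    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t nat

The same decoding applies to the other PROCTITLE records above, including the truncated kubelet command line in the setxattr denial.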
Feb 9 19:45:43.356669 kubelet[1543]: E0209 19:45:43.356649 1543 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:43.446638 kubelet[1543]: I0209 19:45:43.446570 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:43.447302 kubelet[1543]: E0209 19:45:43.447289 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:43.447561 kubelet[1543]: E0209 19:45:43.447448 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 446511021, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:43.448351 kubelet[1543]: E0209 19:45:43.448275 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 446537241, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:43.543698 kubelet[1543]: E0209 19:45:43.543603 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 446540727, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
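The rejected-event names above ("10.0.0.71.17b2496d7ae21973", "10.0.0.71.17b2496d74090b5c", ...) appear to follow client-go's naming convention of the involved object's name plus the event's FirstTimestamp as hexadecimal UnixNano. A small sketch, assuming that convention, that converts a suffix back to wall-clock time so it can be checked against the FirstTimestamp printed in the entry itself:

    from datetime import datetime, timezone

    # Sample name copied from the NodeAllocatableEnforced entry above; the hex suffix
    # should decode to that entry's FirstTimestamp (19:45:43.288740211 UTC).
    name = "10.0.0.71.17b2496d7ae21973"

    node, _, suffix = name.rpartition(".")
    nanos = int(suffix, 16)                       # 1707507943288740211
    secs, rem = divmod(nanos, 1_000_000_000)
    first_seen = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=rem // 1000)
    print(node, first_seen.isoformat())           # 10.0.0.71 2024-02-09T19:45:43.288740+00:00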
Feb 9 19:45:43.757871 kubelet[1543]: E0209 19:45:43.757812 1543 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:43.848791 kubelet[1543]: I0209 19:45:43.848738 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:43.849786 kubelet[1543]: E0209 19:45:43.849713 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 848705416, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:43.849786 kubelet[1543]: E0209 19:45:43.849763 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:43.944042 kubelet[1543]: E0209 19:45:43.943946 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 848711768, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:44.142159 kubelet[1543]: E0209 19:45:44.142040 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:44.144263 kubelet[1543]: E0209 19:45:44.144162 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 848714052, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:44.171922 kubelet[1543]: W0209 19:45:44.171887 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:44.171922 kubelet[1543]: E0209 19:45:44.171923 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:44.267701 kubelet[1543]: W0209 19:45:44.267654 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:44.267861 kubelet[1543]: E0209 19:45:44.267715 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:44.559120 kubelet[1543]: E0209 19:45:44.559014 1543 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:44.567922 kubelet[1543]: W0209 19:45:44.567902 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:44.567977 kubelet[1543]: E0209 19:45:44.567936 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:44.650918 kubelet[1543]: I0209 19:45:44.650891 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:44.652069 kubelet[1543]: E0209 19:45:44.652006 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 44, 650853243, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:44.652069 kubelet[1543]: E0209 19:45:44.652060 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:44.652814 kubelet[1543]: E0209 19:45:44.652769 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 44, 650865075, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:44.685960 kubelet[1543]: W0209 19:45:44.685926 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:44.685960 kubelet[1543]: E0209 19:45:44.685959 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:44.744071 kubelet[1543]: E0209 19:45:44.743983 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 44, 650867339, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:45.142822 kubelet[1543]: E0209 19:45:45.142780 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:46.015774 kubelet[1543]: W0209 19:45:46.015730 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:46.015774 kubelet[1543]: E0209 19:45:46.015763 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:46.143516 kubelet[1543]: E0209 19:45:46.143443 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:46.160074 kubelet[1543]: E0209 19:45:46.160032 1543 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:46.252948 kubelet[1543]: I0209 19:45:46.252917 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:46.254316 kubelet[1543]: E0209 19:45:46.254292 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:46.254359 kubelet[1543]: E0209 19:45:46.254283 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 46, 252873591, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:46.255068 kubelet[1543]: E0209 19:45:46.255027 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 46, 252884431, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:46.255921 kubelet[1543]: E0209 19:45:46.255878 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 46, 252886855, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:46.504041 kubelet[1543]: W0209 19:45:46.503944 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:46.504041 kubelet[1543]: E0209 19:45:46.503978 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:46.870952 kubelet[1543]: W0209 19:45:46.870845 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:46.870952 kubelet[1543]: E0209 19:45:46.870878 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:46.922920 kubelet[1543]: W0209 19:45:46.922889 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:46.922920 kubelet[1543]: E0209 19:45:46.922913 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:47.144037 kubelet[1543]: E0209 19:45:47.143893 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:48.144307 kubelet[1543]: E0209 19:45:48.144257 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:49.144808 kubelet[1543]: E0209 19:45:49.144760 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:49.361888 kubelet[1543]: E0209 19:45:49.361848 1543 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.71" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:45:49.456075 kubelet[1543]: I0209 19:45:49.455982 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:49.457079 kubelet[1543]: E0209 19:45:49.457051 1543 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.71" Feb 9 19:45:49.457146 kubelet[1543]: E0209 19:45:49.457047 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74090b5c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.71 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173851996, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 49, 455929399, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74090b5c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:49.458016 kubelet[1543]: E0209 19:45:49.457968 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d740953b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.71 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173870521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 49, 455941552, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d740953b9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:45:49.458566 kubelet[1543]: E0209 19:45:49.458520 1543 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71.17b2496d74096903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.71", UID:"10.0.0.71", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.71 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.71"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 43, 173875971, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 49, 455950349, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.71.17b2496d74096903" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:45:50.145814 kubelet[1543]: E0209 19:45:50.145761 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:50.475556 kubelet[1543]: W0209 19:45:50.475462 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:50.475556 kubelet[1543]: E0209 19:45:50.475494 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:45:50.628337 kubelet[1543]: W0209 19:45:50.628307 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:50.628337 kubelet[1543]: E0209 19:45:50.628334 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:45:50.907086 kubelet[1543]: W0209 19:45:50.906916 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:45:50.907086 kubelet[1543]: E0209 19:45:50.907039 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 
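The lease-controller retries above back off geometrically: 400ms, 800ms, 1.6s, 3.2s, then 6.4s while the anonymous user is still forbidden from reading leases. A toy sketch that merely reproduces the observed sequence; the initial delay is taken from the log, the doubling is inferred from it, and the cap is an illustrative assumption, not kubelet's actual limit:

    def observed_backoff(initial=0.4, factor=2.0, cap=7.0):
        """Yield retry delays (seconds) matching the pattern seen in the log above."""
        # initial and factor come from the logged retry intervals; cap is an assumption.
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    gen = observed_backoff()
    print([next(gen) for _ in range(5)])   # [0.4, 0.8, 1.6, 3.2, 6.4]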
Feb 9 19:45:51.146939 kubelet[1543]: E0209 19:45:51.146877 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:52.148018 kubelet[1543]: E0209 19:45:52.147957 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:53.048931 kubelet[1543]: W0209 19:45:53.048893 1543 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:53.048931 kubelet[1543]: E0209 19:45:53.048924 1543 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:45:53.132440 kubelet[1543]: I0209 19:45:53.132375 1543 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:45:53.148929 kubelet[1543]: E0209 19:45:53.148885 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:53.289860 kubelet[1543]: E0209 19:45:53.289829 1543 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.71\" not found" Feb 9 19:45:53.504564 kubelet[1543]: E0209 19:45:53.504458 1543 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.71" not found Feb 9 19:45:54.149220 kubelet[1543]: E0209 19:45:54.149180 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:54.651274 kubelet[1543]: E0209 19:45:54.651240 1543 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.71" not found Feb 9 19:45:55.150102 kubelet[1543]: E0209 19:45:55.149987 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:55.766999 kubelet[1543]: E0209 19:45:55.766951 1543 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.71\" not found" node="10.0.0.71" Feb 9 19:45:55.858425 kubelet[1543]: I0209 19:45:55.858393 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.71" Feb 9 19:45:56.051740 kubelet[1543]: I0209 19:45:56.051608 1543 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.71" Feb 9 19:45:56.150825 kubelet[1543]: E0209 19:45:56.150782 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:56.263225 kubelet[1543]: E0209 19:45:56.263198 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.363817 kubelet[1543]: E0209 19:45:56.363696 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.464603 kubelet[1543]: E0209 19:45:56.464499 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.499206 sudo[1346]: pam_unix(sudo:session): session 
closed for user root Feb 9 19:45:56.497000 audit[1346]: USER_END pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:56.500302 kernel: kauditd_printk_skb: 130 callbacks suppressed Feb 9 19:45:56.500379 kernel: audit: type=1106 audit(1707507956.497:191): pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:56.500592 sshd[1340]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:56.502604 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:42304.service: Deactivated successfully. Feb 9 19:45:56.503551 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:45:56.503648 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:45:56.497000 audit[1346]: CRED_DISP pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:56.504514 systemd-logind[1179]: Removed session 7. Feb 9 19:45:56.507427 kernel: audit: type=1104 audit(1707507956.497:192): pid=1346 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:56.507468 kernel: audit: type=1106 audit(1707507956.499:193): pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:56.499000 audit[1340]: USER_END pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:56.499000 audit[1340]: CRED_DISP pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:56.515248 kernel: audit: type=1104 audit(1707507956.499:194): pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:56.515287 kernel: audit: type=1131 audit(1707507956.501:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:42304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:56.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:42304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:56.565663 kubelet[1543]: E0209 19:45:56.565621 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.666058 kubelet[1543]: E0209 19:45:56.665953 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.766565 kubelet[1543]: E0209 19:45:56.766516 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.867105 kubelet[1543]: E0209 19:45:56.867059 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:56.967894 kubelet[1543]: E0209 19:45:56.967739 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.068323 kubelet[1543]: E0209 19:45:57.068265 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.151097 kubelet[1543]: E0209 19:45:57.151057 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:57.169229 kubelet[1543]: E0209 19:45:57.169182 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.269800 kubelet[1543]: E0209 19:45:57.269667 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.370204 kubelet[1543]: E0209 19:45:57.370163 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.470799 kubelet[1543]: E0209 19:45:57.470751 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.571417 kubelet[1543]: E0209 19:45:57.571286 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.671972 kubelet[1543]: E0209 19:45:57.671912 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.772517 kubelet[1543]: E0209 19:45:57.772463 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.873160 kubelet[1543]: E0209 19:45:57.873030 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:57.973728 kubelet[1543]: E0209 19:45:57.973658 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.074553 kubelet[1543]: E0209 19:45:58.074479 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.152321 kubelet[1543]: E0209 19:45:58.152176 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:58.175424 kubelet[1543]: E0209 19:45:58.175373 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.275901 kubelet[1543]: E0209 19:45:58.275867 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.376524 kubelet[1543]: E0209 19:45:58.376499 1543 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.476988 kubelet[1543]: E0209 19:45:58.476922 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.577387 kubelet[1543]: E0209 19:45:58.577356 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.677993 kubelet[1543]: E0209 19:45:58.677959 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.778584 kubelet[1543]: E0209 19:45:58.778456 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.879022 kubelet[1543]: E0209 19:45:58.878979 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:58.979543 kubelet[1543]: E0209 19:45:58.979487 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.080196 kubelet[1543]: E0209 19:45:59.080099 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.153067 kubelet[1543]: E0209 19:45:59.153018 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:59.180310 kubelet[1543]: E0209 19:45:59.180239 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.280938 kubelet[1543]: E0209 19:45:59.280888 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.381705 kubelet[1543]: E0209 19:45:59.381564 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.482238 kubelet[1543]: E0209 19:45:59.482189 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.582868 kubelet[1543]: E0209 19:45:59.582814 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.683775 kubelet[1543]: E0209 19:45:59.683639 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.784413 kubelet[1543]: E0209 19:45:59.784340 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.885153 kubelet[1543]: E0209 19:45:59.885083 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:45:59.986078 kubelet[1543]: E0209 19:45:59.985948 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.086613 kubelet[1543]: E0209 19:46:00.086554 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.153458 kubelet[1543]: E0209 19:46:00.153400 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:00.187639 kubelet[1543]: E0209 19:46:00.187585 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" 
not found" Feb 9 19:46:00.288451 kubelet[1543]: E0209 19:46:00.288334 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.388612 kubelet[1543]: E0209 19:46:00.388587 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.489525 kubelet[1543]: E0209 19:46:00.489487 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.590258 kubelet[1543]: E0209 19:46:00.590146 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.691011 kubelet[1543]: E0209 19:46:00.690948 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.791569 kubelet[1543]: E0209 19:46:00.791522 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.892266 kubelet[1543]: E0209 19:46:00.892141 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:00.992707 kubelet[1543]: E0209 19:46:00.992656 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.093173 kubelet[1543]: E0209 19:46:01.093129 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.153985 kubelet[1543]: E0209 19:46:01.153884 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:01.194037 kubelet[1543]: E0209 19:46:01.194005 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.294758 kubelet[1543]: E0209 19:46:01.294699 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.395105 kubelet[1543]: E0209 19:46:01.395071 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.495652 kubelet[1543]: E0209 19:46:01.495532 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.595978 kubelet[1543]: E0209 19:46:01.595930 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.696455 kubelet[1543]: E0209 19:46:01.696424 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.796797 kubelet[1543]: E0209 19:46:01.796736 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.897063 kubelet[1543]: E0209 19:46:01.897033 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:01.997524 kubelet[1543]: E0209 19:46:01.997473 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.098007 kubelet[1543]: E0209 19:46:02.097902 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.154798 kubelet[1543]: E0209 19:46:02.154760 1543 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:02.199039 kubelet[1543]: E0209 19:46:02.198996 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.299309 kubelet[1543]: E0209 19:46:02.299275 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.400040 kubelet[1543]: E0209 19:46:02.399982 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.500587 kubelet[1543]: E0209 19:46:02.500531 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.601129 kubelet[1543]: E0209 19:46:02.601078 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.701800 kubelet[1543]: E0209 19:46:02.701711 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.802367 kubelet[1543]: E0209 19:46:02.802310 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:02.902933 kubelet[1543]: E0209 19:46:02.902871 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:03.003536 kubelet[1543]: E0209 19:46:03.003421 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:03.104182 kubelet[1543]: E0209 19:46:03.104134 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:03.141567 kubelet[1543]: E0209 19:46:03.141534 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:03.155058 kubelet[1543]: E0209 19:46:03.155025 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:03.204281 kubelet[1543]: E0209 19:46:03.204235 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Feb 9 19:46:03.305166 kubelet[1543]: I0209 19:46:03.305079 1543 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:46:03.305357 env[1193]: time="2024-02-09T19:46:03.305310155Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:46:03.305626 kubelet[1543]: I0209 19:46:03.305532 1543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:46:04.153294 kubelet[1543]: I0209 19:46:04.153257 1543 apiserver.go:52] "Watching apiserver" Feb 9 19:46:04.155120 kubelet[1543]: E0209 19:46:04.155103 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:04.155392 kubelet[1543]: I0209 19:46:04.155231 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:04.155392 kubelet[1543]: I0209 19:46:04.155296 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:04.155392 kubelet[1543]: I0209 19:46:04.155323 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:04.155611 kubelet[1543]: E0209 19:46:04.155595 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:04.245088 kubelet[1543]: I0209 19:46:04.245052 1543 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:46:04.256851 kubelet[1543]: I0209 19:46:04.256826 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-cni-bin-dir\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.256893 kubelet[1543]: I0209 19:46:04.256859 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/82172907-09c1-4046-9b3d-fe68160d689e-varrun\") pod \"csi-node-driver-r59lv\" (UID: \"82172907-09c1-4046-9b3d-fe68160d689e\") " pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:04.256893 kubelet[1543]: I0209 19:46:04.256880 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82172907-09c1-4046-9b3d-fe68160d689e-socket-dir\") pod \"csi-node-driver-r59lv\" (UID: \"82172907-09c1-4046-9b3d-fe68160d689e\") " pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:04.256983 kubelet[1543]: I0209 19:46:04.256901 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-lib-modules\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.256983 kubelet[1543]: I0209 19:46:04.256920 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-var-lib-calico\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.256983 kubelet[1543]: I0209 19:46:04.256951 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpsl\" (UniqueName: \"kubernetes.io/projected/5de365a7-378e-4967-bae2-fd7e4dedb3b2-kube-api-access-7jpsl\") pod \"calico-node-9dq54\" (UID: 
\"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.256983 kubelet[1543]: I0209 19:46:04.256968 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82172907-09c1-4046-9b3d-fe68160d689e-registration-dir\") pod \"csi-node-driver-r59lv\" (UID: \"82172907-09c1-4046-9b3d-fe68160d689e\") " pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:04.256983 kubelet[1543]: I0209 19:46:04.256983 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c795f60b-8b03-4516-9cf7-42a0eb12582c-kube-proxy\") pod \"kube-proxy-cvzqs\" (UID: \"c795f60b-8b03-4516-9cf7-42a0eb12582c\") " pod="kube-system/kube-proxy-cvzqs" Feb 9 19:46:04.257082 kubelet[1543]: I0209 19:46:04.256999 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c795f60b-8b03-4516-9cf7-42a0eb12582c-lib-modules\") pod \"kube-proxy-cvzqs\" (UID: \"c795f60b-8b03-4516-9cf7-42a0eb12582c\") " pod="kube-system/kube-proxy-cvzqs" Feb 9 19:46:04.257082 kubelet[1543]: I0209 19:46:04.257016 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh77d\" (UniqueName: \"kubernetes.io/projected/c795f60b-8b03-4516-9cf7-42a0eb12582c-kube-api-access-mh77d\") pod \"kube-proxy-cvzqs\" (UID: \"c795f60b-8b03-4516-9cf7-42a0eb12582c\") " pod="kube-system/kube-proxy-cvzqs" Feb 9 19:46:04.257082 kubelet[1543]: I0209 19:46:04.257034 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5de365a7-378e-4967-bae2-fd7e4dedb3b2-node-certs\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257082 kubelet[1543]: I0209 19:46:04.257054 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5de365a7-378e-4967-bae2-fd7e4dedb3b2-tigera-ca-bundle\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257082 kubelet[1543]: I0209 19:46:04.257071 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-var-run-calico\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257183 kubelet[1543]: I0209 19:46:04.257089 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-flexvol-driver-host\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257183 kubelet[1543]: I0209 19:46:04.257132 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82172907-09c1-4046-9b3d-fe68160d689e-kubelet-dir\") pod \"csi-node-driver-r59lv\" (UID: \"82172907-09c1-4046-9b3d-fe68160d689e\") " 
pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:04.257183 kubelet[1543]: I0209 19:46:04.257170 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqq44\" (UniqueName: \"kubernetes.io/projected/82172907-09c1-4046-9b3d-fe68160d689e-kube-api-access-nqq44\") pod \"csi-node-driver-r59lv\" (UID: \"82172907-09c1-4046-9b3d-fe68160d689e\") " pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:04.257248 kubelet[1543]: I0209 19:46:04.257199 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-policysync\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257248 kubelet[1543]: I0209 19:46:04.257232 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-cni-net-dir\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257289 kubelet[1543]: I0209 19:46:04.257254 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-cni-log-dir\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257312 kubelet[1543]: I0209 19:46:04.257297 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c795f60b-8b03-4516-9cf7-42a0eb12582c-xtables-lock\") pod \"kube-proxy-cvzqs\" (UID: \"c795f60b-8b03-4516-9cf7-42a0eb12582c\") " pod="kube-system/kube-proxy-cvzqs" Feb 9 19:46:04.257335 kubelet[1543]: I0209 19:46:04.257329 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de365a7-378e-4967-bae2-fd7e4dedb3b2-xtables-lock\") pod \"calico-node-9dq54\" (UID: \"5de365a7-378e-4967-bae2-fd7e4dedb3b2\") " pod="calico-system/calico-node-9dq54" Feb 9 19:46:04.257379 kubelet[1543]: I0209 19:46:04.257358 1543 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:46:04.359408 kubelet[1543]: E0209 19:46:04.359376 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.359408 kubelet[1543]: W0209 19:46:04.359390 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.359553 kubelet[1543]: E0209 19:46:04.359419 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:04.359611 kubelet[1543]: E0209 19:46:04.359590 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.359611 kubelet[1543]: W0209 19:46:04.359609 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.359670 kubelet[1543]: E0209 19:46:04.359618 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.360913 kubelet[1543]: E0209 19:46:04.360899 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.360968 kubelet[1543]: W0209 19:46:04.360913 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.360968 kubelet[1543]: E0209 19:46:04.360943 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.459295 kubelet[1543]: E0209 19:46:04.459247 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.459295 kubelet[1543]: W0209 19:46:04.459260 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.459295 kubelet[1543]: E0209 19:46:04.459274 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.459574 kubelet[1543]: E0209 19:46:04.459543 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.459616 kubelet[1543]: W0209 19:46:04.459573 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.459616 kubelet[1543]: E0209 19:46:04.459605 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.460155 kubelet[1543]: E0209 19:46:04.460112 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.460155 kubelet[1543]: W0209 19:46:04.460123 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.460155 kubelet[1543]: E0209 19:46:04.460139 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:04.561295 kubelet[1543]: E0209 19:46:04.561263 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.561295 kubelet[1543]: W0209 19:46:04.561279 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.561295 kubelet[1543]: E0209 19:46:04.561294 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.561498 kubelet[1543]: E0209 19:46:04.561487 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.561523 kubelet[1543]: W0209 19:46:04.561498 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.561523 kubelet[1543]: E0209 19:46:04.561512 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.561712 kubelet[1543]: E0209 19:46:04.561703 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.561739 kubelet[1543]: W0209 19:46:04.561713 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.561739 kubelet[1543]: E0209 19:46:04.561725 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.662336 kubelet[1543]: E0209 19:46:04.662311 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.662336 kubelet[1543]: W0209 19:46:04.662328 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.662469 kubelet[1543]: E0209 19:46:04.662346 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.662566 kubelet[1543]: E0209 19:46:04.662553 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.662566 kubelet[1543]: W0209 19:46:04.662564 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.662641 kubelet[1543]: E0209 19:46:04.662575 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:04.662767 kubelet[1543]: E0209 19:46:04.662752 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.662767 kubelet[1543]: W0209 19:46:04.662762 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.662813 kubelet[1543]: E0209 19:46:04.662774 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.763004 kubelet[1543]: E0209 19:46:04.762436 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.763004 kubelet[1543]: W0209 19:46:04.762454 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.763004 kubelet[1543]: E0209 19:46:04.762471 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.763528 kubelet[1543]: E0209 19:46:04.763513 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.763528 kubelet[1543]: W0209 19:46:04.763523 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.763528 kubelet[1543]: E0209 19:46:04.763532 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.763710 kubelet[1543]: E0209 19:46:04.763689 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.763710 kubelet[1543]: W0209 19:46:04.763701 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.763710 kubelet[1543]: E0209 19:46:04.763709 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.864487 kubelet[1543]: E0209 19:46:04.864449 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.864487 kubelet[1543]: W0209 19:46:04.864472 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.864487 kubelet[1543]: E0209 19:46:04.864490 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:04.864712 kubelet[1543]: E0209 19:46:04.864640 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.864712 kubelet[1543]: W0209 19:46:04.864645 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.864712 kubelet[1543]: E0209 19:46:04.864654 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.962997 kubelet[1543]: E0209 19:46:04.962970 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.962997 kubelet[1543]: W0209 19:46:04.962988 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.963139 kubelet[1543]: E0209 19:46:04.963013 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:04.965157 kubelet[1543]: E0209 19:46:04.965142 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:04.965203 kubelet[1543]: W0209 19:46:04.965165 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:04.965203 kubelet[1543]: E0209 19:46:04.965191 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:05.064244 kubelet[1543]: E0209 19:46:05.063749 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:05.064391 env[1193]: time="2024-02-09T19:46:05.064339975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dq54,Uid:5de365a7-378e-4967-bae2-fd7e4dedb3b2,Namespace:calico-system,Attempt:0,}" Feb 9 19:46:05.066313 kubelet[1543]: E0209 19:46:05.066285 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:05.066313 kubelet[1543]: W0209 19:46:05.066301 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:05.066422 kubelet[1543]: E0209 19:46:05.066321 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:05.155398 kubelet[1543]: E0209 19:46:05.155367 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:05.165161 kubelet[1543]: E0209 19:46:05.165145 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:05.165161 kubelet[1543]: W0209 19:46:05.165159 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:05.165296 kubelet[1543]: E0209 19:46:05.165270 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:05.277671 kubelet[1543]: E0209 19:46:05.277641 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:05.358102 kubelet[1543]: E0209 19:46:05.357996 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:05.358371 env[1193]: time="2024-02-09T19:46:05.358323514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvzqs,Uid:c795f60b-8b03-4516-9cf7-42a0eb12582c,Namespace:kube-system,Attempt:0,}" Feb 9 19:46:06.156281 kubelet[1543]: E0209 19:46:06.156239 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:06.325145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962970686.mount: Deactivated successfully. 
Feb 9 19:46:06.331239 env[1193]: time="2024-02-09T19:46:06.331193345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.332060 env[1193]: time="2024-02-09T19:46:06.332015526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.334242 env[1193]: time="2024-02-09T19:46:06.334195053Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.335861 env[1193]: time="2024-02-09T19:46:06.335829247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.340589 env[1193]: time="2024-02-09T19:46:06.340557022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.341878 env[1193]: time="2024-02-09T19:46:06.341854715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.342392 env[1193]: time="2024-02-09T19:46:06.342364220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.344821 env[1193]: time="2024-02-09T19:46:06.344774329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.361053 env[1193]: time="2024-02-09T19:46:06.360725518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:06.361053 env[1193]: time="2024-02-09T19:46:06.360779529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:06.361053 env[1193]: time="2024-02-09T19:46:06.360792854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:06.361053 env[1193]: time="2024-02-09T19:46:06.360949197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5b5d12e1a59babc986ad72fc1643c7b229d5c92cd5788fcd1a0e79715a5c5a1 pid=1662 runtime=io.containerd.runc.v2 Feb 9 19:46:06.364481 env[1193]: time="2024-02-09T19:46:06.364115684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:06.364481 env[1193]: time="2024-02-09T19:46:06.364150580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:06.364481 env[1193]: time="2024-02-09T19:46:06.364160769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:06.364481 env[1193]: time="2024-02-09T19:46:06.364286214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968 pid=1673 runtime=io.containerd.runc.v2 Feb 9 19:46:06.395576 env[1193]: time="2024-02-09T19:46:06.395529074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvzqs,Uid:c795f60b-8b03-4516-9cf7-42a0eb12582c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b5d12e1a59babc986ad72fc1643c7b229d5c92cd5788fcd1a0e79715a5c5a1\"" Feb 9 19:46:06.396242 kubelet[1543]: E0209 19:46:06.396221 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:06.397429 env[1193]: time="2024-02-09T19:46:06.397390324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:46:06.400876 env[1193]: time="2024-02-09T19:46:06.400821968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dq54,Uid:5de365a7-378e-4967-bae2-fd7e4dedb3b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\"" Feb 9 19:46:06.401398 kubelet[1543]: E0209 19:46:06.401376 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:07.157036 kubelet[1543]: E0209 19:46:07.157000 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:07.277999 kubelet[1543]: E0209 19:46:07.277949 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:07.397404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782510078.mount: Deactivated successfully. 
Feb 9 19:46:08.157205 kubelet[1543]: E0209 19:46:08.157137 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:08.191043 env[1193]: time="2024-02-09T19:46:08.190988861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:08.192767 env[1193]: time="2024-02-09T19:46:08.192726379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:08.194040 env[1193]: time="2024-02-09T19:46:08.194004706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:08.195259 env[1193]: time="2024-02-09T19:46:08.195228140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:08.195539 env[1193]: time="2024-02-09T19:46:08.195515689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:46:08.196222 env[1193]: time="2024-02-09T19:46:08.196192999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:46:08.197304 env[1193]: time="2024-02-09T19:46:08.197272733Z" level=info msg="CreateContainer within sandbox \"b5b5d12e1a59babc986ad72fc1643c7b229d5c92cd5788fcd1a0e79715a5c5a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:46:08.207885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846242634.mount: Deactivated successfully. 
Feb 9 19:46:08.209956 env[1193]: time="2024-02-09T19:46:08.209907283Z" level=info msg="CreateContainer within sandbox \"b5b5d12e1a59babc986ad72fc1643c7b229d5c92cd5788fcd1a0e79715a5c5a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3f4453cc22a218311e27b4d46e62b393ee90c1aa0e47ddb4609b41ff8c7c897\"" Feb 9 19:46:08.210701 env[1193]: time="2024-02-09T19:46:08.210641439Z" level=info msg="StartContainer for \"d3f4453cc22a218311e27b4d46e62b393ee90c1aa0e47ddb4609b41ff8c7c897\"" Feb 9 19:46:08.253556 env[1193]: time="2024-02-09T19:46:08.253481224Z" level=info msg="StartContainer for \"d3f4453cc22a218311e27b4d46e62b393ee90c1aa0e47ddb4609b41ff8c7c897\" returns successfully" Feb 9 19:46:08.299362 kernel: audit: type=1325 audit(1707507968.292:196): table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.299486 kernel: audit: type=1300 audit(1707507968.292:196): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe457725a0 a2=0 a3=7ffe4577258c items=0 ppid=1748 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.292000 audit[1788]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.292000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe457725a0 a2=0 a3=7ffe4577258c items=0 ppid=1748 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.301395 kernel: audit: type=1327 audit(1707507968.292:196): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:46:08.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:46:08.292000 audit[1787]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.303005 kernel: audit: type=1325 audit(1707507968.292:197): table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.303064 kernel: audit: type=1300 audit(1707507968.292:197): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9f7f5c10 a2=0 a3=31030 items=0 ppid=1748 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.292000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9f7f5c10 a2=0 a3=31030 items=0 ppid=1748 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:46:08.307593 kernel: audit: type=1327 audit(1707507968.292:197): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:46:08.307637 kernel: audit: type=1325 audit(1707507968.292:198): table=nat:37 family=10 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.292000 audit[1789]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.309064 kernel: audit: type=1300 audit(1707507968.292:198): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff15b05ac0 a2=0 a3=7fff15b05aac items=0 ppid=1748 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.292000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff15b05ac0 a2=0 a3=7fff15b05aac items=0 ppid=1748 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:46:08.312781 kubelet[1543]: E0209 19:46:08.312718 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:08.313828 kernel: audit: type=1327 audit(1707507968.292:198): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:46:08.313884 kernel: audit: type=1325 audit(1707507968.295:199): table=nat:38 family=2 entries=1 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.295000 audit[1790]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.295000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8f42c990 a2=0 a3=7ffc8f42c97c items=0 ppid=1748 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:46:08.295000 audit[1791]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.295000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa418f0f0 a2=0 a3=7fffa418f0dc items=0 ppid=1748 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:46:08.298000 audit[1792]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.298000 audit[1792]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffe02789d00 a2=0 a3=7ffe02789cec items=0 ppid=1748 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:46:08.327670 kubelet[1543]: I0209 19:46:08.327637 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cvzqs" podStartSLOduration=-9.22337202452717e+09 pod.CreationTimestamp="2024-02-09 19:45:56 +0000 UTC" firstStartedPulling="2024-02-09 19:46:06.396912628 +0000 UTC m=+23.406858694" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:08.327541517 +0000 UTC m=+25.337487583" watchObservedRunningTime="2024-02-09 19:46:08.327604675 +0000 UTC m=+25.337550742" Feb 9 19:46:08.366936 kubelet[1543]: E0209 19:46:08.366891 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.366936 kubelet[1543]: W0209 19:46:08.366911 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.366936 kubelet[1543]: E0209 19:46:08.366928 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367097 kubelet[1543]: E0209 19:46:08.367052 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367097 kubelet[1543]: W0209 19:46:08.367058 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367097 kubelet[1543]: E0209 19:46:08.367066 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367196 kubelet[1543]: E0209 19:46:08.367171 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367196 kubelet[1543]: W0209 19:46:08.367176 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367196 kubelet[1543]: E0209 19:46:08.367184 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.367338 kubelet[1543]: E0209 19:46:08.367316 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367338 kubelet[1543]: W0209 19:46:08.367325 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367338 kubelet[1543]: E0209 19:46:08.367333 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367446 kubelet[1543]: E0209 19:46:08.367438 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367446 kubelet[1543]: W0209 19:46:08.367444 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367513 kubelet[1543]: E0209 19:46:08.367452 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367554 kubelet[1543]: E0209 19:46:08.367548 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367554 kubelet[1543]: W0209 19:46:08.367554 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367618 kubelet[1543]: E0209 19:46:08.367561 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367713 kubelet[1543]: E0209 19:46:08.367670 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367713 kubelet[1543]: W0209 19:46:08.367694 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367713 kubelet[1543]: E0209 19:46:08.367702 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.367827 kubelet[1543]: E0209 19:46:08.367810 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367827 kubelet[1543]: W0209 19:46:08.367816 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.367827 kubelet[1543]: E0209 19:46:08.367823 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.367930 kubelet[1543]: E0209 19:46:08.367926 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.367930 kubelet[1543]: W0209 19:46:08.367931 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368009 kubelet[1543]: E0209 19:46:08.367939 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368099 kubelet[1543]: E0209 19:46:08.368076 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368099 kubelet[1543]: W0209 19:46:08.368085 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368099 kubelet[1543]: E0209 19:46:08.368093 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368211 kubelet[1543]: E0209 19:46:08.368196 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368211 kubelet[1543]: W0209 19:46:08.368201 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368211 kubelet[1543]: E0209 19:46:08.368209 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368335 kubelet[1543]: E0209 19:46:08.368320 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368335 kubelet[1543]: W0209 19:46:08.368329 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368335 kubelet[1543]: E0209 19:46:08.368337 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368457 kubelet[1543]: E0209 19:46:08.368444 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368457 kubelet[1543]: W0209 19:46:08.368453 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368457 kubelet[1543]: E0209 19:46:08.368460 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.368577 kubelet[1543]: E0209 19:46:08.368564 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368577 kubelet[1543]: W0209 19:46:08.368573 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368577 kubelet[1543]: E0209 19:46:08.368580 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368732 kubelet[1543]: E0209 19:46:08.368714 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368732 kubelet[1543]: W0209 19:46:08.368723 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368732 kubelet[1543]: E0209 19:46:08.368732 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.368862 kubelet[1543]: E0209 19:46:08.368840 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.368862 kubelet[1543]: W0209 19:46:08.368846 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.368862 kubelet[1543]: E0209 19:46:08.368853 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.382354 kubelet[1543]: E0209 19:46:08.382320 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.382354 kubelet[1543]: W0209 19:46:08.382340 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.382354 kubelet[1543]: E0209 19:46:08.382368 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.382603 kubelet[1543]: E0209 19:46:08.382583 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.382603 kubelet[1543]: W0209 19:46:08.382595 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.382673 kubelet[1543]: E0209 19:46:08.382611 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.382838 kubelet[1543]: E0209 19:46:08.382817 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.382838 kubelet[1543]: W0209 19:46:08.382827 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.382838 kubelet[1543]: E0209 19:46:08.382841 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.383004 kubelet[1543]: E0209 19:46:08.382985 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.383004 kubelet[1543]: W0209 19:46:08.382994 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.383004 kubelet[1543]: E0209 19:46:08.383007 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.383253 kubelet[1543]: E0209 19:46:08.383233 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.383253 kubelet[1543]: W0209 19:46:08.383243 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.383253 kubelet[1543]: E0209 19:46:08.383255 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.383432 kubelet[1543]: E0209 19:46:08.383420 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.383432 kubelet[1543]: W0209 19:46:08.383428 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.383485 kubelet[1543]: E0209 19:46:08.383441 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.383726 kubelet[1543]: E0209 19:46:08.383702 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.383726 kubelet[1543]: W0209 19:46:08.383723 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.383801 kubelet[1543]: E0209 19:46:08.383752 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.383923 kubelet[1543]: E0209 19:46:08.383908 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.383923 kubelet[1543]: W0209 19:46:08.383918 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.383993 kubelet[1543]: E0209 19:46:08.383936 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.384163 kubelet[1543]: E0209 19:46:08.384147 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.384163 kubelet[1543]: W0209 19:46:08.384158 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.384233 kubelet[1543]: E0209 19:46:08.384176 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.384384 kubelet[1543]: E0209 19:46:08.384366 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.384384 kubelet[1543]: W0209 19:46:08.384377 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.384460 kubelet[1543]: E0209 19:46:08.384392 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.384566 kubelet[1543]: E0209 19:46:08.384551 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.384566 kubelet[1543]: W0209 19:46:08.384565 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.384615 kubelet[1543]: E0209 19:46:08.384584 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:08.384804 kubelet[1543]: E0209 19:46:08.384788 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:08.384804 kubelet[1543]: W0209 19:46:08.384800 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:08.384875 kubelet[1543]: E0209 19:46:08.384814 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:08.397000 audit[1821]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.397000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffece912660 a2=0 a3=7ffece91264c items=0 ppid=1748 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:46:08.399000 audit[1823]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.399000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc7dd2c990 a2=0 a3=7ffc7dd2c97c items=0 ppid=1748 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:46:08.402000 audit[1826]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.402000 audit[1826]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd499d0e40 a2=0 a3=7ffd499d0e2c items=0 ppid=1748 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:46:08.403000 audit[1827]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.403000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01fd3830 a2=0 a3=7ffe01fd381c items=0 ppid=1748 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:46:08.405000 audit[1829]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.405000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa088b030 a2=0 a3=7fffa088b01c items=0 ppid=1748 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 19:46:08.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:46:08.407000 audit[1830]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.407000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeef32b720 a2=0 a3=7ffeef32b70c items=0 ppid=1748 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:46:08.410000 audit[1832]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.410000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5f732db0 a2=0 a3=7ffd5f732d9c items=0 ppid=1748 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:46:08.413000 audit[1835]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.413000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc68b47430 a2=0 a3=7ffc68b4741c items=0 ppid=1748 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:46:08.414000 audit[1836]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.414000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11370870 a2=0 a3=7fff1137085c items=0 ppid=1748 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:46:08.416000 audit[1838]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.416000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe90bdb80 
a2=0 a3=7fffe90bdb6c items=0 ppid=1748 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:46:08.417000 audit[1839]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.417000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbecb50d0 a2=0 a3=7ffcbecb50bc items=0 ppid=1748 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:46:08.419000 audit[1841]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.419000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff1801ba0 a2=0 a3=7ffff1801b8c items=0 ppid=1748 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:46:08.423000 audit[1844]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.423000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd7ae7190 a2=0 a3=7ffdd7ae717c items=0 ppid=1748 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:46:08.426000 audit[1847]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.426000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda42c7980 a2=0 a3=7ffda42c796c items=0 ppid=1748 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.426000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:46:08.427000 audit[1848]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.427000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff85655a80 a2=0 a3=7fff85655a6c items=0 ppid=1748 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:46:08.429000 audit[1850]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.429000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc9239aed0 a2=0 a3=7ffc9239aebc items=0 ppid=1748 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:46:08.433000 audit[1853]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:46:08.433000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea936f1e0 a2=0 a3=7ffea936f1cc items=0 ppid=1748 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:46:08.442000 audit[1857]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:08.442000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc0b009210 a2=0 a3=7ffc0b0091fc items=0 ppid=1748 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:08.449000 audit[1857]: NETFILTER_CFG table=nat:59 family=2 entries=24 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:08.449000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc0b009210 a2=0 a3=7ffc0b0091fc items=0 ppid=1748 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:08.454000 audit[1863]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.454000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd8a40f60 a2=0 a3=7ffdd8a40f4c items=0 ppid=1748 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:46:08.456000 audit[1865]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.456000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdb184b340 a2=0 a3=7ffdb184b32c items=0 ppid=1748 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:46:08.459000 audit[1868]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.459000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffbb2a2fa0 a2=0 a3=7fffbb2a2f8c items=0 ppid=1748 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:46:08.460000 audit[1869]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.460000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdedb9a00 a2=0 a3=7ffcdedb99ec items=0 ppid=1748 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:46:08.462000 audit[1871]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.462000 audit[1871]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe0248dbf0 a2=0 a3=7ffe0248dbdc items=0 ppid=1748 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:46:08.463000 audit[1872]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.463000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd517a7c40 a2=0 a3=7ffd517a7c2c items=0 ppid=1748 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:46:08.465000 audit[1874]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.465000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb7a7b890 a2=0 a3=7fffb7a7b87c items=0 ppid=1748 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:46:08.467000 audit[1877]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.467000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe26b74030 a2=0 a3=7ffe26b7401c items=0 ppid=1748 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:46:08.468000 audit[1878]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.468000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb00cc440 a2=0 a3=7ffcb00cc42c items=0 ppid=1748 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.468000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:46:08.470000 audit[1880]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1880 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.470000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdb1b58d0 a2=0 a3=7fffdb1b58bc items=0 ppid=1748 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:46:08.471000 audit[1881]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.471000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2f206fe0 a2=0 a3=7ffe2f206fcc items=0 ppid=1748 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:46:08.473000 audit[1883]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.473000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf0f227b0 a2=0 a3=7ffdf0f2279c items=0 ppid=1748 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:46:08.475000 audit[1886]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.475000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc603f4440 a2=0 a3=7ffc603f442c items=0 ppid=1748 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:46:08.478000 audit[1889]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.478000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfeb46290 a2=0 a3=7ffdfeb4627c items=0 ppid=1748 
pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:46:08.479000 audit[1890]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.479000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc5ddd5d0 a2=0 a3=7fffc5ddd5bc items=0 ppid=1748 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:46:08.480000 audit[1892]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.480000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe4a3ec080 a2=0 a3=7ffe4a3ec06c items=0 ppid=1748 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:46:08.483000 audit[1895]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:46:08.483000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc5c1d9d90 a2=0 a3=7ffc5c1d9d7c items=0 ppid=1748 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:46:08.487000 audit[1899]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:46:08.487000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcd0cb24b0 a2=0 a3=7ffcd0cb249c items=0 ppid=1748 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.487000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:08.487000 audit[1899]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1899 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:46:08.487000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffcd0cb24b0 a2=0 a3=7ffcd0cb249c items=0 ppid=1748 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:08.487000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:09.158105 kubelet[1543]: E0209 19:46:09.158055 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:09.277773 kubelet[1543]: E0209 19:46:09.277732 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:09.313775 kubelet[1543]: E0209 19:46:09.313747 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:09.375505 kubelet[1543]: E0209 19:46:09.375477 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.375505 kubelet[1543]: W0209 19:46:09.375493 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.375505 kubelet[1543]: E0209 19:46:09.375510 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.375670 kubelet[1543]: E0209 19:46:09.375651 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.375670 kubelet[1543]: W0209 19:46:09.375660 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.375670 kubelet[1543]: E0209 19:46:09.375668 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.375823 kubelet[1543]: E0209 19:46:09.375809 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.375823 kubelet[1543]: W0209 19:46:09.375817 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.375823 kubelet[1543]: E0209 19:46:09.375825 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:09.375963 kubelet[1543]: E0209 19:46:09.375949 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.375963 kubelet[1543]: W0209 19:46:09.375957 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.375963 kubelet[1543]: E0209 19:46:09.375964 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376069 kubelet[1543]: E0209 19:46:09.376057 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376069 kubelet[1543]: W0209 19:46:09.376065 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376119 kubelet[1543]: E0209 19:46:09.376072 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376181 kubelet[1543]: E0209 19:46:09.376164 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376181 kubelet[1543]: W0209 19:46:09.376175 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376226 kubelet[1543]: E0209 19:46:09.376184 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376332 kubelet[1543]: E0209 19:46:09.376316 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376332 kubelet[1543]: W0209 19:46:09.376325 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376332 kubelet[1543]: E0209 19:46:09.376334 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376441 kubelet[1543]: E0209 19:46:09.376431 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376441 kubelet[1543]: W0209 19:46:09.376438 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376486 kubelet[1543]: E0209 19:46:09.376446 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:09.376541 kubelet[1543]: E0209 19:46:09.376531 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376541 kubelet[1543]: W0209 19:46:09.376540 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376590 kubelet[1543]: E0209 19:46:09.376548 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376773 kubelet[1543]: E0209 19:46:09.376758 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376773 kubelet[1543]: W0209 19:46:09.376767 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376773 kubelet[1543]: E0209 19:46:09.376775 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.376881 kubelet[1543]: E0209 19:46:09.376869 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.376881 kubelet[1543]: W0209 19:46:09.376877 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.376936 kubelet[1543]: E0209 19:46:09.376884 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.377007 kubelet[1543]: E0209 19:46:09.376997 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.377007 kubelet[1543]: W0209 19:46:09.377005 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.377050 kubelet[1543]: E0209 19:46:09.377012 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.377119 kubelet[1543]: E0209 19:46:09.377109 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.377119 kubelet[1543]: W0209 19:46:09.377116 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.377166 kubelet[1543]: E0209 19:46:09.377125 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:09.377242 kubelet[1543]: E0209 19:46:09.377230 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.377242 kubelet[1543]: W0209 19:46:09.377240 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.377293 kubelet[1543]: E0209 19:46:09.377248 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.377384 kubelet[1543]: E0209 19:46:09.377370 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.377384 kubelet[1543]: W0209 19:46:09.377378 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.377384 kubelet[1543]: E0209 19:46:09.377385 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.377497 kubelet[1543]: E0209 19:46:09.377486 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.377497 kubelet[1543]: W0209 19:46:09.377493 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.377542 kubelet[1543]: E0209 19:46:09.377500 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.387766 kubelet[1543]: E0209 19:46:09.387747 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.387766 kubelet[1543]: W0209 19:46:09.387759 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.387766 kubelet[1543]: E0209 19:46:09.387769 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.387918 kubelet[1543]: E0209 19:46:09.387904 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.387918 kubelet[1543]: W0209 19:46:09.387913 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.387981 kubelet[1543]: E0209 19:46:09.387926 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:09.388081 kubelet[1543]: E0209 19:46:09.388067 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388081 kubelet[1543]: W0209 19:46:09.388076 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388150 kubelet[1543]: E0209 19:46:09.388089 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.388234 kubelet[1543]: E0209 19:46:09.388216 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388234 kubelet[1543]: W0209 19:46:09.388225 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388280 kubelet[1543]: E0209 19:46:09.388237 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.388365 kubelet[1543]: E0209 19:46:09.388355 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388365 kubelet[1543]: W0209 19:46:09.388362 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388411 kubelet[1543]: E0209 19:46:09.388374 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.388591 kubelet[1543]: E0209 19:46:09.388565 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388616 kubelet[1543]: W0209 19:46:09.388588 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388616 kubelet[1543]: E0209 19:46:09.388613 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.388763 kubelet[1543]: E0209 19:46:09.388752 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388763 kubelet[1543]: W0209 19:46:09.388760 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388815 kubelet[1543]: E0209 19:46:09.388774 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:46:09.388905 kubelet[1543]: E0209 19:46:09.388895 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.388905 kubelet[1543]: W0209 19:46:09.388903 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.388949 kubelet[1543]: E0209 19:46:09.388918 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.389074 kubelet[1543]: E0209 19:46:09.389060 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.389074 kubelet[1543]: W0209 19:46:09.389068 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.389141 kubelet[1543]: E0209 19:46:09.389079 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.389222 kubelet[1543]: E0209 19:46:09.389208 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.389222 kubelet[1543]: W0209 19:46:09.389219 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.389266 kubelet[1543]: E0209 19:46:09.389231 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.389370 kubelet[1543]: E0209 19:46:09.389359 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.389370 kubelet[1543]: W0209 19:46:09.389368 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.389418 kubelet[1543]: E0209 19:46:09.389377 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.389501 kubelet[1543]: E0209 19:46:09.389490 1543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:46:09.389501 kubelet[1543]: W0209 19:46:09.389499 1543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:46:09.389546 kubelet[1543]: E0209 19:46:09.389508 1543 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:46:09.426079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303144254.mount: Deactivated successfully. 
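The repeated driver-call.go / plugins.go errors above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before a driver binary exists there: the executable is missing, so stdout is empty and the empty string fails to unmarshal as JSON. The calico/pod2daemon-flexvol image pulled below is what normally installs that driver. As a rough illustration of the contract the kubelet expects from a FlexVolume driver's "init" call, and not of the actual Calico uds binary, here is a minimal Python sketch (names and behavior are assumptions for illustration only):

#!/usr/bin/env python3
# Minimal stand-in for a FlexVolume driver's "init" handshake (illustrative
# sketch only, not the real Calico uds driver). The kubelet runs "<driver> init"
# and parses stdout as JSON; an absent binary or empty stdout is what produces
# the "unexpected end of JSON input" errors logged above.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Drivers that do not implement attach/detach advertise that here.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any other call should still emit well-formed JSON.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())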
Feb 9 19:46:10.158896 kubelet[1543]: E0209 19:46:10.158848 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:10.283063 env[1193]: time="2024-02-09T19:46:10.283007916Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:10.285772 env[1193]: time="2024-02-09T19:46:10.285738205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:10.287875 env[1193]: time="2024-02-09T19:46:10.287849644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:10.289962 env[1193]: time="2024-02-09T19:46:10.289935115Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:10.290587 env[1193]: time="2024-02-09T19:46:10.290558253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:46:10.292125 env[1193]: time="2024-02-09T19:46:10.292103540Z" level=info msg="CreateContainer within sandbox \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:46:10.304356 env[1193]: time="2024-02-09T19:46:10.304286754Z" level=info msg="CreateContainer within sandbox \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3\"" Feb 9 19:46:10.304809 env[1193]: time="2024-02-09T19:46:10.304763518Z" level=info msg="StartContainer for \"5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3\"" Feb 9 19:46:10.354632 env[1193]: time="2024-02-09T19:46:10.354580104Z" level=info msg="StartContainer for \"5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3\" returns successfully" Feb 9 19:46:10.424467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3-rootfs.mount: Deactivated successfully. 
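The NETFILTER_CFG / SYSCALL / PROCTITLE triples above and below are auditd records for the iptables, ip6tables, and iptables-restore calls (presumably kube-proxy, ppid 1748) that set up the KUBE-* chains. The PROCTITLE field is the process command line, hex-encoded with NUL-separated arguments; longer records are truncated by auditd, so they decode to a prefix of the invocation. A small decoding sketch in Python, using the proctitle value from the 19:46:08.397000 record above as its sample:

#!/usr/bin/env python3
# Decode an auditd PROCTITLE value (hex-encoded, NUL-separated argv) back into
# a readable command line. The sample is copied from the 19:46:08.397000 record
# above; truncated records decode to a command-line prefix rather than the full
# invocation.

def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return raw.decode("ascii", errors="replace").replace("\x00", " ")

if __name__ == "__main__":
    sample = ("69707461626C6573002D770035002D5700313030303030002D4E00"
              "4B5542452D45585445524E414C2D5345525649434553002D740066696C746572")
    # Prints: iptables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter
    print(decode_proctitle(sample))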
Feb 9 19:46:10.903164 env[1193]: time="2024-02-09T19:46:10.903109096Z" level=info msg="shim disconnected" id=5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3 Feb 9 19:46:10.903164 env[1193]: time="2024-02-09T19:46:10.903161764Z" level=warning msg="cleaning up after shim disconnected" id=5cdbef995256c879565ed0a270fd4a96484b61606d43bc2cf7fb452d5f85d9d3 namespace=k8s.io Feb 9 19:46:10.903345 env[1193]: time="2024-02-09T19:46:10.903174148Z" level=info msg="cleaning up dead shim" Feb 9 19:46:10.909989 env[1193]: time="2024-02-09T19:46:10.909934213Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1973 runtime=io.containerd.runc.v2\n" Feb 9 19:46:11.159156 kubelet[1543]: E0209 19:46:11.158995 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:11.276993 kubelet[1543]: E0209 19:46:11.276955 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:11.318632 kubelet[1543]: E0209 19:46:11.318605 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:11.319225 env[1193]: time="2024-02-09T19:46:11.319198233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:46:12.159622 kubelet[1543]: E0209 19:46:12.159582 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:12.230000 audit[2015]: NETFILTER_CFG table=filter:79 family=2 entries=12 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.230000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcdea8fdc0 a2=0 a3=7ffcdea8fdac items=0 ppid=1748 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.231000 audit[2015]: NETFILTER_CFG table=nat:80 family=2 entries=30 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.231000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcdea8fdc0 a2=0 a3=7ffcdea8fdac items=0 ppid=1748 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.264000 audit[2041]: NETFILTER_CFG table=filter:81 family=2 entries=9 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.264000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc4d4a6380 a2=0 a3=7ffc4d4a636c items=0 ppid=1748 pid=2041 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.265000 audit[2041]: NETFILTER_CFG table=nat:82 family=2 entries=51 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.265000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc4d4a6380 a2=0 a3=7ffc4d4a636c items=0 ppid=1748 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.676758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272637413.mount: Deactivated successfully. Feb 9 19:46:13.160177 kubelet[1543]: E0209 19:46:13.160146 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:13.277670 kubelet[1543]: E0209 19:46:13.277317 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:13.317128 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 9 19:46:13.317249 kernel: audit: type=1325 audit(1707507973.304:244): table=filter:83 family=2 entries=6 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:13.317273 kernel: audit: type=1300 audit(1707507973.304:244): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffde6d3ed90 a2=0 a3=7ffde6d3ed7c items=0 ppid=1748 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:13.304000 audit[2070]: NETFILTER_CFG table=filter:83 family=2 entries=6 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:13.304000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffde6d3ed90 a2=0 a3=7ffde6d3ed7c items=0 ppid=1748 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:13.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:13.320832 kernel: audit: type=1327 audit(1707507973.304:244): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:13.323000 audit[2070]: NETFILTER_CFG table=nat:84 family=2 entries=72 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:13.323000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffde6d3ed90 a2=0 
a3=7ffde6d3ed7c items=0 ppid=1748 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:13.335980 kernel: audit: type=1325 audit(1707507973.323:245): table=nat:84 family=2 entries=72 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:13.336025 kernel: audit: type=1300 audit(1707507973.323:245): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffde6d3ed90 a2=0 a3=7ffde6d3ed7c items=0 ppid=1748 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:13.336043 kernel: audit: type=1327 audit(1707507973.323:245): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:13.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:14.160277 kubelet[1543]: E0209 19:46:14.160235 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:15.160384 kubelet[1543]: E0209 19:46:15.160340 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:15.277758 kubelet[1543]: E0209 19:46:15.277731 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:16.161401 kubelet[1543]: E0209 19:46:16.161350 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:16.278306 env[1193]: time="2024-02-09T19:46:16.278242796Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:16.280138 env[1193]: time="2024-02-09T19:46:16.280099854Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:16.281738 env[1193]: time="2024-02-09T19:46:16.281711645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:16.283250 env[1193]: time="2024-02-09T19:46:16.283217533Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:16.283942 env[1193]: time="2024-02-09T19:46:16.283896768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:46:16.285429 env[1193]: time="2024-02-09T19:46:16.285401474Z" level=info msg="CreateContainer within sandbox 
\"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:46:16.298351 env[1193]: time="2024-02-09T19:46:16.298304656Z" level=info msg="CreateContainer within sandbox \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e\"" Feb 9 19:46:16.298672 env[1193]: time="2024-02-09T19:46:16.298642219Z" level=info msg="StartContainer for \"c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e\"" Feb 9 19:46:16.345162 env[1193]: time="2024-02-09T19:46:16.345115184Z" level=info msg="StartContainer for \"c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e\" returns successfully" Feb 9 19:46:17.161603 kubelet[1543]: E0209 19:46:17.161539 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:17.277128 kubelet[1543]: E0209 19:46:17.277075 1543 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:17.329446 kubelet[1543]: E0209 19:46:17.329409 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:18.161899 kubelet[1543]: E0209 19:46:18.161855 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:18.330305 kubelet[1543]: E0209 19:46:18.330277 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:18.814169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e-rootfs.mount: Deactivated successfully. 
Feb 9 19:46:18.892014 kubelet[1543]: I0209 19:46:18.891987 1543 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:46:19.072234 env[1193]: time="2024-02-09T19:46:19.072093994Z" level=info msg="shim disconnected" id=c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e Feb 9 19:46:19.072234 env[1193]: time="2024-02-09T19:46:19.072160411Z" level=warning msg="cleaning up after shim disconnected" id=c598b8f875d934aefbbb14647d922aebd176c9290b77b5eb76619ea32642954e namespace=k8s.io Feb 9 19:46:19.072234 env[1193]: time="2024-02-09T19:46:19.072172884Z" level=info msg="cleaning up dead shim" Feb 9 19:46:19.078368 env[1193]: time="2024-02-09T19:46:19.078312408Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2128 runtime=io.containerd.runc.v2\n" Feb 9 19:46:19.162870 kubelet[1543]: E0209 19:46:19.162822 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:19.279950 env[1193]: time="2024-02-09T19:46:19.279901353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r59lv,Uid:82172907-09c1-4046-9b3d-fe68160d689e,Namespace:calico-system,Attempt:0,}" Feb 9 19:46:19.332902 env[1193]: time="2024-02-09T19:46:19.332769703Z" level=error msg="Failed to destroy network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:19.333944 kubelet[1543]: E0209 19:46:19.333915 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:19.334334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d-shm.mount: Deactivated successfully. 
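Annotation (not part of the log): the sandbox failure just above is the Calico CNI plugin refusing to add or delete networks because /var/lib/calico/nodename does not exist yet; per the error text, that file only appears once the calico/node container is running and has mounted /var/lib/calico/. A minimal, hypothetical pre-flight check that mirrors the condition named in the error (calico_ready and the message wording are assumptions, the path comes from the log):

    from pathlib import Path

    NODENAME = Path("/var/lib/calico/nodename")  # path taken from the error message above

    def calico_ready() -> bool:
        """Return True once calico/node has written its nodename file."""
        if not NODENAME.is_file():
            print(f"stat {NODENAME}: no such file or directory: "
                  "check that the calico/node container is running "
                  "and has mounted /var/lib/calico/")
            return False
        return True

Until that condition holds, every RunPodSandbox attempt for csi-node-driver-r59lv (and, later, the nginx pod) fails the same way, which is the retry loop visible in the following entries.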
Feb 9 19:46:19.335064 env[1193]: time="2024-02-09T19:46:19.335024617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:46:19.335128 env[1193]: time="2024-02-09T19:46:19.335093428Z" level=error msg="encountered an error cleaning up failed sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:19.335166 env[1193]: time="2024-02-09T19:46:19.335147260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r59lv,Uid:82172907-09c1-4046-9b3d-fe68160d689e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:19.335338 kubelet[1543]: E0209 19:46:19.335318 1543 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:19.335432 kubelet[1543]: E0209 19:46:19.335360 1543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:19.335432 kubelet[1543]: E0209 19:46:19.335378 1543 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r59lv" Feb 9 19:46:19.335432 kubelet[1543]: E0209 19:46:19.335426 1543 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r59lv_calico-system(82172907-09c1-4046-9b3d-fe68160d689e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r59lv_calico-system(82172907-09c1-4046-9b3d-fe68160d689e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:20.163157 kubelet[1543]: E0209 19:46:20.163102 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:20.336168 kubelet[1543]: I0209 
19:46:20.336133 1543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:20.336808 env[1193]: time="2024-02-09T19:46:20.336765512Z" level=info msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" Feb 9 19:46:20.357939 env[1193]: time="2024-02-09T19:46:20.357886866Z" level=error msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" failed" error="failed to destroy network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:20.358138 kubelet[1543]: E0209 19:46:20.358105 1543 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:20.358195 kubelet[1543]: E0209 19:46:20.358160 1543 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d} Feb 9 19:46:20.358195 kubelet[1543]: E0209 19:46:20.358189 1543 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82172907-09c1-4046-9b3d-fe68160d689e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:46:20.358289 kubelet[1543]: E0209 19:46:20.358213 1543 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82172907-09c1-4046-9b3d-fe68160d689e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r59lv" podUID=82172907-09c1-4046-9b3d-fe68160d689e Feb 9 19:46:21.163828 kubelet[1543]: E0209 19:46:21.163782 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:22.164869 kubelet[1543]: E0209 19:46:22.164811 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:23.142057 kubelet[1543]: E0209 19:46:23.142003 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:23.165532 kubelet[1543]: E0209 19:46:23.165495 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:24.166213 kubelet[1543]: E0209 19:46:24.166160 
1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:24.231129 kubelet[1543]: I0209 19:46:24.231097 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:24.262932 kubelet[1543]: I0209 19:46:24.262902 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzlfr\" (UniqueName: \"kubernetes.io/projected/c70a3502-9740-4aed-bb80-e14b379ad2bd-kube-api-access-lzlfr\") pod \"nginx-deployment-8ffc5cf85-9hsxq\" (UID: \"c70a3502-9740-4aed-bb80-e14b379ad2bd\") " pod="default/nginx-deployment-8ffc5cf85-9hsxq" Feb 9 19:46:24.534827 env[1193]: time="2024-02-09T19:46:24.534730001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9hsxq,Uid:c70a3502-9740-4aed-bb80-e14b379ad2bd,Namespace:default,Attempt:0,}" Feb 9 19:46:25.099566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492915930.mount: Deactivated successfully. Feb 9 19:46:25.166455 kubelet[1543]: E0209 19:46:25.166418 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:25.260796 update_engine[1180]: I0209 19:46:25.260746 1180 update_attempter.cc:509] Updating boot flags... Feb 9 19:46:26.166587 kubelet[1543]: E0209 19:46:26.166528 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:26.201476 env[1193]: time="2024-02-09T19:46:26.201414818Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:26.204013 env[1193]: time="2024-02-09T19:46:26.203964177Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:26.205926 env[1193]: time="2024-02-09T19:46:26.205887124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:26.208880 env[1193]: time="2024-02-09T19:46:26.208842472Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:26.209328 env[1193]: time="2024-02-09T19:46:26.209288225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:46:26.221874 env[1193]: time="2024-02-09T19:46:26.221830599Z" level=info msg="CreateContainer within sandbox \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:46:26.228857 env[1193]: time="2024-02-09T19:46:26.228797041Z" level=error msg="Failed to destroy network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:26.229149 env[1193]: time="2024-02-09T19:46:26.229122567Z" level=error msg="encountered an error cleaning 
up failed sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:26.229204 env[1193]: time="2024-02-09T19:46:26.229166921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9hsxq,Uid:c70a3502-9740-4aed-bb80-e14b379ad2bd,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:26.229884 kubelet[1543]: E0209 19:46:26.229854 1543 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:26.229954 kubelet[1543]: E0209 19:46:26.229903 1543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-9hsxq" Feb 9 19:46:26.229954 kubelet[1543]: E0209 19:46:26.229923 1543 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-9hsxq" Feb 9 19:46:26.230026 kubelet[1543]: E0209 19:46:26.229984 1543 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-9hsxq_default(c70a3502-9740-4aed-bb80-e14b379ad2bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-9hsxq_default(c70a3502-9740-4aed-bb80-e14b379ad2bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-9hsxq" podUID=c70a3502-9740-4aed-bb80-e14b379ad2bd Feb 9 19:46:26.234880 env[1193]: time="2024-02-09T19:46:26.234840549Z" level=info msg="CreateContainer within sandbox \"955a0a004f212dbb3e882e104aa5e20b9992340f88414d9beb046608e2095968\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"19851769a81de66f201e7426dac15bf1b3f74fd399ae5a00447e65f429833759\"" Feb 9 19:46:26.235444 env[1193]: time="2024-02-09T19:46:26.235391610Z" level=info msg="StartContainer for 
\"19851769a81de66f201e7426dac15bf1b3f74fd399ae5a00447e65f429833759\"" Feb 9 19:46:26.292979 env[1193]: time="2024-02-09T19:46:26.292901138Z" level=info msg="StartContainer for \"19851769a81de66f201e7426dac15bf1b3f74fd399ae5a00447e65f429833759\" returns successfully" Feb 9 19:46:26.347804 kubelet[1543]: E0209 19:46:26.347773 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:26.358050 kubelet[1543]: I0209 19:46:26.357593 1543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:26.358349 env[1193]: time="2024-02-09T19:46:26.358276519Z" level=info msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" Feb 9 19:46:26.360427 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:46:26.360497 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:46:26.385560 env[1193]: time="2024-02-09T19:46:26.385468022Z" level=error msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" failed" error="failed to destroy network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:46:26.385781 kubelet[1543]: E0209 19:46:26.385760 1543 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:26.385832 kubelet[1543]: E0209 19:46:26.385800 1543 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf} Feb 9 19:46:26.385859 kubelet[1543]: E0209 19:46:26.385832 1543 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c70a3502-9740-4aed-bb80-e14b379ad2bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:46:26.385922 kubelet[1543]: E0209 19:46:26.385860 1543 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c70a3502-9740-4aed-bb80-e14b379ad2bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-9hsxq" podUID=c70a3502-9740-4aed-bb80-e14b379ad2bd Feb 9 
19:46:26.431198 kubelet[1543]: I0209 19:46:26.431084 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9dq54" podStartSLOduration=-9.22337200642376e+09 pod.CreationTimestamp="2024-02-09 19:45:56 +0000 UTC" firstStartedPulling="2024-02-09 19:46:06.402099564 +0000 UTC m=+23.412045630" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:26.430877792 +0000 UTC m=+43.440823858" watchObservedRunningTime="2024-02-09 19:46:26.431017175 +0000 UTC m=+43.440963241" Feb 9 19:46:27.166974 kubelet[1543]: E0209 19:46:27.166920 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:27.182053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf-shm.mount: Deactivated successfully. Feb 9 19:46:27.359660 kubelet[1543]: E0209 19:46:27.359634 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:27.980808 kernel: audit: type=1400 audit(1707507987.973:246): avc: denied { write } for pid=2471 comm="tee" name="fd" dev="proc" ino=19267 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.980989 kernel: audit: type=1300 audit(1707507987.973:246): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0012498f a2=241 a3=1b6 items=1 ppid=2421 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.973000 audit[2471]: AVC avc: denied { write } for pid=2471 comm="tee" name="fd" dev="proc" ino=19267 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.973000 audit[2471]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0012498f a2=241 a3=1b6 items=1 ppid=2421 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.982453 kernel: audit: type=1307 audit(1707507987.973:246): cwd="/etc/service/enabled/felix/log" Feb 9 19:46:27.973000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:46:27.973000 audit: PATH item=0 name="/dev/fd/63" inode=19261 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.985712 kernel: audit: type=1302 audit(1707507987.973:246): item=0 name="/dev/fd/63" inode=19261 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.975000 audit[2476]: AVC avc: denied { write } for pid=2476 comm="tee" name="fd" dev="proc" ino=19271 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.993209 kernel: audit: type=1327 audit(1707507987.973:246): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.993344 kernel: audit: type=1400 audit(1707507987.975:247): avc: denied { write } for pid=2476 comm="tee" name="fd" dev="proc" ino=19271 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.993372 kernel: audit: type=1300 audit(1707507987.975:247): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda9f1198f a2=241 a3=1b6 items=1 ppid=2429 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.975000 audit[2476]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda9f1198f a2=241 a3=1b6 items=1 ppid=2429 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.975000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:46:28.004713 kernel: audit: type=1307 audit(1707507987.975:247): cwd="/etc/service/enabled/bird6/log" Feb 9 19:46:27.975000 audit: PATH item=0 name="/dev/fd/63" inode=19264 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:28.015373 kernel: audit: type=1302 audit(1707507987.975:247): item=0 name="/dev/fd/63" inode=19264 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:28.015452 kernel: audit: type=1327 audit(1707507987.975:247): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.988000 audit[2474]: AVC avc: denied { write } for pid=2474 comm="tee" name="fd" dev="proc" ino=20377 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.988000 audit[2474]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeeaf27980 a2=241 a3=1b6 items=1 ppid=2431 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.988000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:46:27.988000 audit: PATH item=0 name="/dev/fd/63" inode=21649 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.991000 audit[2483]: AVC avc: denied { write } for pid=2483 comm="tee" name="fd" dev="proc" ino=20381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.991000 audit[2483]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 
a0=ffffff9c a1=7fff846e297f a2=241 a3=1b6 items=1 ppid=2425 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.991000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:46:27.991000 audit: PATH item=0 name="/dev/fd/63" inode=20371 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.993000 audit[2494]: AVC avc: denied { write } for pid=2494 comm="tee" name="fd" dev="proc" ino=20794 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.993000 audit[2494]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd74845990 a2=241 a3=1b6 items=1 ppid=2422 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.993000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:46:27.993000 audit: PATH item=0 name="/dev/fd/63" inode=20791 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:27.993000 audit[2491]: AVC avc: denied { write } for pid=2491 comm="tee" name="fd" dev="proc" ino=20798 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:27.993000 audit[2491]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff06dd598f a2=241 a3=1b6 items=1 ppid=2434 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:27.993000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:46:27.993000 audit: PATH item=0 name="/dev/fd/63" inode=20790 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:27.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:28.005000 audit[2496]: AVC avc: denied { write } for pid=2496 comm="tee" name="fd" dev="proc" ino=20802 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:28.005000 audit[2496]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe23287991 a2=241 a3=1b6 items=1 ppid=2437 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.005000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:46:28.005000 audit: PATH item=0 name="/dev/fd/63" inode=20385 dev=00:0c mode=010600 ouid=0 ogid=0 
rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:28.005000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:28.079733 kernel: Initializing XFRM netlink socket Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.157000 audit: BPF prog-id=10 op=LOAD Feb 9 19:46:28.157000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6dc338c0 a2=70 a3=7f8354f9f000 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.158000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit: BPF prog-id=11 op=LOAD Feb 9 19:46:28.158000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6dc338c0 a2=70 a3=6e items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.158000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.158000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:46:28.158000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.158000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd6dc33870 a2=70 a3=7ffd6dc338c0 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.158000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
19:46:28.159000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit: BPF prog-id=12 op=LOAD Feb 9 19:46:28.159000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd6dc33850 a2=70 a3=7ffd6dc338c0 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.159000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.159000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dc33930 a2=70 a3=0 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.159000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dc33920 a2=70 a3=0 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.159000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.159000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.159000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffd6dc33960 a2=70 a3=0 items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.159000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { perfmon } for pid=2565 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit[2565]: AVC avc: denied { bpf } for pid=2565 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.160000 audit: BPF prog-id=13 op=LOAD Feb 9 19:46:28.160000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd6dc33880 a2=70 a3=ffffffff items=0 ppid=2423 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:46:28.160000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:28.163000 audit[2570]: AVC avc: denied { bpf } for pid=2570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.163000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd380df6c0 a2=70 a3=208 items=0 ppid=2423 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.163000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:46:28.163000 audit[2570]: AVC avc: denied { bpf } for pid=2570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:28.163000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd380df590 a2=70 a3=3 items=0 ppid=2423 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.163000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:46:28.167255 kubelet[1543]: E0209 19:46:28.167224 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:28.171000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:46:28.199000 audit[2589]: NETFILTER_CFG table=raw:85 family=2 entries=19 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:28.199000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7fffad2b2df0 a2=0 a3=7fffad2b2ddc items=0 ppid=2423 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.199000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:28.203000 audit[2592]: NETFILTER_CFG table=mangle:86 family=2 entries=19 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:28.203000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fffa45b8c00 a2=0 a3=7fffa45b8bec items=0 ppid=2423 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.203000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:28.205000 audit[2590]: NETFILTER_CFG table=nat:87 family=2 
entries=16 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:28.205000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffcc86d9b30 a2=0 a3=55fc7ab83000 items=0 ppid=2423 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.205000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:28.206000 audit[2593]: NETFILTER_CFG table=filter:88 family=2 entries=39 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:28.206000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffd795cd7d0 a2=0 a3=564f27001000 items=0 ppid=2423 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:28.206000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:29.089106 systemd-networkd[1073]: vxlan.calico: Link UP Feb 9 19:46:29.089115 systemd-networkd[1073]: vxlan.calico: Gained carrier Feb 9 19:46:29.168218 kubelet[1543]: E0209 19:46:29.168155 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:30.168604 kubelet[1543]: E0209 19:46:30.168567 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:31.101833 systemd-networkd[1073]: vxlan.calico: Gained IPv6LL Feb 9 19:46:31.169726 kubelet[1543]: E0209 19:46:31.169694 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:32.170226 kubelet[1543]: E0209 19:46:32.170155 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:33.171304 kubelet[1543]: E0209 19:46:33.171253 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:33.277704 env[1193]: time="2024-02-09T19:46:33.277650486Z" level=info msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.314 [INFO][2629] k8s.go 578: Cleaning up netns ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.314 [INFO][2629] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" iface="eth0" netns="/var/run/netns/cni-9f422bd0-2395-5631-37da-9635f972b590" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.314 [INFO][2629] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" iface="eth0" netns="/var/run/netns/cni-9f422bd0-2395-5631-37da-9635f972b590" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.315 [INFO][2629] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" iface="eth0" netns="/var/run/netns/cni-9f422bd0-2395-5631-37da-9635f972b590" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.315 [INFO][2629] k8s.go 585: Releasing IP address(es) ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.315 [INFO][2629] utils.go 188: Calico CNI releasing IP address ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.332 [INFO][2637] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.332 [INFO][2637] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.332 [INFO][2637] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.340 [WARNING][2637] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.340 [INFO][2637] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.341 [INFO][2637] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:33.343623 env[1193]: 2024-02-09 19:46:33.342 [INFO][2629] k8s.go 591: Teardown processing complete. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:33.344102 env[1193]: time="2024-02-09T19:46:33.343772669Z" level=info msg="TearDown network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" successfully" Feb 9 19:46:33.344102 env[1193]: time="2024-02-09T19:46:33.343805100Z" level=info msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" returns successfully" Feb 9 19:46:33.344441 env[1193]: time="2024-02-09T19:46:33.344402476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r59lv,Uid:82172907-09c1-4046-9b3d-fe68160d689e,Namespace:calico-system,Attempt:1,}" Feb 9 19:46:33.345225 systemd[1]: run-netns-cni\x2d9f422bd0\x2d2395\x2d5631\x2d37da\x2d9635f972b590.mount: Deactivated successfully. 
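The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated command lines: the two bpftool records decode to "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp" and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", and the iptables-nft records decode to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". On this arch (c000003e, x86_64) syscall=321 is bpf(2) and syscall=46 is sendmsg(2), which matches those tools. A minimal decoding sketch in Python (the helper name is illustrative, not part of any tool shown in this log):

    # decode an audit PROCTITLE hex blob back into the original argv
    def decode_proctitle(hex_blob: str) -> list[str]:
        # auditd records the command line as hex bytes with NUL separators between arguments
        return bytes.fromhex(hex_blob).decode("utf-8", "replace").split("\x00")

    if __name__ == "__main__":
        sample = "627066746F6F6C0070726F67006C6F6164"  # truncated sample: bpftool / prog / load
        print(decode_proctitle(sample))               # ['bpftool', 'prog', 'load']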
Feb 9 19:46:33.436304 systemd-networkd[1073]: calid6c90e29474: Link UP Feb 9 19:46:33.437884 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:33.438127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid6c90e29474: link becomes ready Feb 9 19:46:33.438209 systemd-networkd[1073]: calid6c90e29474: Gained carrier Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.380 [INFO][2645] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-csi--node--driver--r59lv-eth0 csi-node-driver- calico-system 82172907-09c1-4046-9b3d-fe68160d689e 1046 0 2024-02-09 19:45:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.71 csi-node-driver-r59lv eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid6c90e29474 [] []}} ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.380 [INFO][2645] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.400 [INFO][2658] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" HandleID="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.411 [INFO][2658] ipam_plugin.go 268: Auto assigning IP ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" HandleID="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025db20), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.71", "pod":"csi-node-driver-r59lv", "timestamp":"2024-02-09 19:46:33.400667218 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.411 [INFO][2658] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.411 [INFO][2658] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.411 [INFO][2658] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.413 [INFO][2658] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.416 [INFO][2658] ipam.go 372: Looking up existing affinities for host host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.420 [INFO][2658] ipam.go 489: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.421 [INFO][2658] ipam.go 155: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.423 [INFO][2658] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.423 [INFO][2658] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.424 [INFO][2658] ipam.go 1682: Creating new handle: k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9 Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.427 [INFO][2658] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.431 [INFO][2658] ipam.go 1216: Successfully claimed IPs: [192.168.77.65/26] block=192.168.77.64/26 handle="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.431 [INFO][2658] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.65/26] handle="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" host="10.0.0.71" Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.431 [INFO][2658] ipam_plugin.go 377: Released host-wide IPAM lock. 
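The IPAM records above show Calico confirming this node's block affinity for 192.168.77.64/26 and claiming the first usable address in it, 192.168.77.65, for csi-node-driver-r59lv; the next address, 192.168.77.66, is handed to the nginx pod later in this log. A quick sanity check of that block arithmetic, sketched with the Python standard library (not taken from the Calico IPAM code itself):

    import ipaddress

    block = ipaddress.ip_network("192.168.77.64/26")
    print(block.num_addresses)       # 64 addresses in a /26 block
    hosts = list(block.hosts())
    print(hosts[0], hosts[1])        # 192.168.77.65 192.168.77.66 -- the two IPs claimed in this log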
Feb 9 19:46:33.446726 env[1193]: 2024-02-09 19:46:33.431 [INFO][2658] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.77.65/26] IPv6=[] ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" HandleID="k8s-pod-network.7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.434 [INFO][2645] k8s.go 385: Populated endpoint ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--r59lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82172907-09c1-4046-9b3d-fe68160d689e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"csi-node-driver-r59lv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid6c90e29474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.434 [INFO][2645] k8s.go 386: Calico CNI using IPs: [192.168.77.65/32] ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.434 [INFO][2645] dataplane_linux.go 68: Setting the host side veth name to calid6c90e29474 ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.438 [INFO][2645] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.438 [INFO][2645] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--r59lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82172907-09c1-4046-9b3d-fe68160d689e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9", Pod:"csi-node-driver-r59lv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid6c90e29474", MAC:"42:26:0a:3e:39:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:33.447512 env[1193]: 2024-02-09 19:46:33.445 [INFO][2645] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9" Namespace="calico-system" Pod="csi-node-driver-r59lv" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:33.458000 audit[2687]: NETFILTER_CFG table=filter:89 family=2 entries=36 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:33.459768 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 9 19:46:33.459818 kernel: audit: type=1325 audit(1707507993.458:271): table=filter:89 family=2 entries=36 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:33.458000 audit[2687]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff6b49ce80 a2=0 a3=7fff6b49ce6c items=0 ppid=2423 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:33.462286 env[1193]: time="2024-02-09T19:46:33.462220448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:33.462286 env[1193]: time="2024-02-09T19:46:33.462273709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:33.462390 env[1193]: time="2024-02-09T19:46:33.462289408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:33.462496 env[1193]: time="2024-02-09T19:46:33.462456103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9 pid=2695 runtime=io.containerd.runc.v2 Feb 9 19:46:33.468088 kernel: audit: type=1300 audit(1707507993.458:271): arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff6b49ce80 a2=0 a3=7fff6b49ce6c items=0 ppid=2423 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:33.468264 kernel: audit: type=1327 audit(1707507993.458:271): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:33.458000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:33.482855 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:33.492109 env[1193]: time="2024-02-09T19:46:33.492073833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r59lv,Uid:82172907-09c1-4046-9b3d-fe68160d689e,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9\"" Feb 9 19:46:33.493346 env[1193]: time="2024-02-09T19:46:33.493290567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:46:34.172101 kubelet[1543]: E0209 19:46:34.172006 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:34.877853 systemd-networkd[1073]: calid6c90e29474: Gained IPv6LL Feb 9 19:46:34.930302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518691940.mount: Deactivated successfully. 
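systemd-networkd reports calid6c90e29474 gaining an IPv6 link-local address shortly after the endpoint is written with MAC 42:26:0a:3e:39:41. Assuming the address is EUI-64 derived (the common default for these veth interfaces; the log does not print the address itself), it can be reconstructed from that MAC, as in this Python sketch:

    import ipaddress

    def mac_to_link_local(mac: str) -> str:
        # EUI-64: flip the universal/local bit and insert ff:fe in the middle of the MAC
        b = [int(x, 16) for x in mac.split(":")]
        b[0] ^= 0x02
        eui64 = b[:3] + [0xFF, 0xFE] + b[3:]
        groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
        return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

    print(mac_to_link_local("42:26:0a:3e:39:41"))    # fe80::4026:aff:fe3e:3941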
Feb 9 19:46:35.172618 kubelet[1543]: E0209 19:46:35.172517 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:35.427452 env[1193]: time="2024-02-09T19:46:35.427315279Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:35.429992 env[1193]: time="2024-02-09T19:46:35.429955053Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:35.431752 env[1193]: time="2024-02-09T19:46:35.431711231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:35.443022 env[1193]: time="2024-02-09T19:46:35.442977795Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:35.443487 env[1193]: time="2024-02-09T19:46:35.443451598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:46:35.445073 env[1193]: time="2024-02-09T19:46:35.445036804Z" level=info msg="CreateContainer within sandbox \"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:46:35.894246 env[1193]: time="2024-02-09T19:46:35.894177337Z" level=info msg="CreateContainer within sandbox \"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"def9aacfbe9ec4c844bca21eedd37f4eb99249504c067217fe6efa17aa843d37\"" Feb 9 19:46:35.894827 env[1193]: time="2024-02-09T19:46:35.894791914Z" level=info msg="StartContainer for \"def9aacfbe9ec4c844bca21eedd37f4eb99249504c067217fe6efa17aa843d37\"" Feb 9 19:46:36.041256 env[1193]: time="2024-02-09T19:46:36.041202834Z" level=info msg="StartContainer for \"def9aacfbe9ec4c844bca21eedd37f4eb99249504c067217fe6efa17aa843d37\" returns successfully" Feb 9 19:46:36.042379 env[1193]: time="2024-02-09T19:46:36.042347581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:46:36.173544 kubelet[1543]: E0209 19:46:36.173409 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:37.174387 kubelet[1543]: E0209 19:46:37.174340 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:37.717020 env[1193]: time="2024-02-09T19:46:37.716949379Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:37.719101 env[1193]: time="2024-02-09T19:46:37.719042880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:37.720478 env[1193]: 
time="2024-02-09T19:46:37.720446203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:37.722037 env[1193]: time="2024-02-09T19:46:37.721993586Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:37.722470 env[1193]: time="2024-02-09T19:46:37.722439685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:46:37.724179 env[1193]: time="2024-02-09T19:46:37.724143333Z" level=info msg="CreateContainer within sandbox \"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:46:37.739592 env[1193]: time="2024-02-09T19:46:37.739546914Z" level=info msg="CreateContainer within sandbox \"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0f1a9da1bac2e03d8c619dceb1557e3ff686de491d742886c10570a9028f8195\"" Feb 9 19:46:37.740152 env[1193]: time="2024-02-09T19:46:37.740109254Z" level=info msg="StartContainer for \"0f1a9da1bac2e03d8c619dceb1557e3ff686de491d742886c10570a9028f8195\"" Feb 9 19:46:37.785626 env[1193]: time="2024-02-09T19:46:37.784626268Z" level=info msg="StartContainer for \"0f1a9da1bac2e03d8c619dceb1557e3ff686de491d742886c10570a9028f8195\" returns successfully" Feb 9 19:46:38.175440 kubelet[1543]: E0209 19:46:38.175382 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:38.314625 kubelet[1543]: I0209 19:46:38.314592 1543 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:46:38.314625 kubelet[1543]: I0209 19:46:38.314630 1543 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:46:38.388472 kubelet[1543]: I0209 19:46:38.388432 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-r59lv" podStartSLOduration=-9.223371994466372e+09 pod.CreationTimestamp="2024-02-09 19:45:56 +0000 UTC" firstStartedPulling="2024-02-09 19:46:33.49290482 +0000 UTC m=+50.502850886" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:38.388008856 +0000 UTC m=+55.397954922" watchObservedRunningTime="2024-02-09 19:46:38.38840414 +0000 UTC m=+55.398350206" Feb 9 19:46:39.175792 kubelet[1543]: E0209 19:46:39.175741 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:39.277822 env[1193]: time="2024-02-09T19:46:39.277781336Z" level=info msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.312 [INFO][2828] k8s.go 578: Cleaning up netns ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 
19:46:39.312 [INFO][2828] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" iface="eth0" netns="/var/run/netns/cni-e56df1cd-c9a7-3806-010f-add598c2cca0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.312 [INFO][2828] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" iface="eth0" netns="/var/run/netns/cni-e56df1cd-c9a7-3806-010f-add598c2cca0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.312 [INFO][2828] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" iface="eth0" netns="/var/run/netns/cni-e56df1cd-c9a7-3806-010f-add598c2cca0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.312 [INFO][2828] k8s.go 585: Releasing IP address(es) ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.312 [INFO][2828] utils.go 188: Calico CNI releasing IP address ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.328 [INFO][2835] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.328 [INFO][2835] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.328 [INFO][2835] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.333 [WARNING][2835] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.333 [INFO][2835] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.334 [INFO][2835] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:39.336566 env[1193]: 2024-02-09 19:46:39.335 [INFO][2828] k8s.go 591: Teardown processing complete. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:39.338117 env[1193]: time="2024-02-09T19:46:39.336741128Z" level=info msg="TearDown network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" successfully" Feb 9 19:46:39.338117 env[1193]: time="2024-02-09T19:46:39.336768479Z" level=info msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" returns successfully" Feb 9 19:46:39.338305 systemd[1]: run-netns-cni\x2de56df1cd\x2dc9a7\x2d3806\x2d010f\x2dadd598c2cca0.mount: Deactivated successfully. 
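The podStartSLOduration of -9.223371994466372e+09 reported a few records back is not a real measurement: lastFinishedPulling is still the zero time (0001-01-01), so the tracker ends up with a duration on the order of math.MinInt64 nanoseconds. A rough check of the magnitude (plain arithmetic, not kubelet code):

    # math.MinInt64 nanoseconds, expressed in seconds
    print(-2**63 / 1e9)   # -9.223372036854776e+09, within ~42 s of the value logged above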
Feb 9 19:46:39.338519 env[1193]: time="2024-02-09T19:46:39.338335829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9hsxq,Uid:c70a3502-9740-4aed-bb80-e14b379ad2bd,Namespace:default,Attempt:1,}" Feb 9 19:46:39.437403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:39.437565 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22349892e82: link becomes ready Feb 9 19:46:39.439533 systemd-networkd[1073]: cali22349892e82: Link UP Feb 9 19:46:39.439754 systemd-networkd[1073]: cali22349892e82: Gained carrier Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.375 [INFO][2842] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0 nginx-deployment-8ffc5cf85- default c70a3502-9740-4aed-bb80-e14b379ad2bd 1073 0 2024-02-09 19:46:24 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.71 nginx-deployment-8ffc5cf85-9hsxq eth0 default [] [] [kns.default ksa.default.default] cali22349892e82 [] []}} ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.376 [INFO][2842] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.403 [INFO][2856] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" HandleID="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.413 [INFO][2856] ipam_plugin.go 268: Auto assigning IP ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" HandleID="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000703880), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"nginx-deployment-8ffc5cf85-9hsxq", "timestamp":"2024-02-09 19:46:39.40349676 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.413 [INFO][2856] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.413 [INFO][2856] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.413 [INFO][2856] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.414 [INFO][2856] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.418 [INFO][2856] ipam.go 372: Looking up existing affinities for host host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.421 [INFO][2856] ipam.go 489: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.423 [INFO][2856] ipam.go 155: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.424 [INFO][2856] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.425 [INFO][2856] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.426 [INFO][2856] ipam.go 1682: Creating new handle: k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995 Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.429 [INFO][2856] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.432 [INFO][2856] ipam.go 1216: Successfully claimed IPs: [192.168.77.66/26] block=192.168.77.64/26 handle="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.432 [INFO][2856] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.66/26] handle="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" host="10.0.0.71" Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.432 [INFO][2856] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:39.444658 env[1193]: 2024-02-09 19:46:39.432 [INFO][2856] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.77.66/26] IPv6=[] ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" HandleID="k8s-pod-network.836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.433 [INFO][2842] k8s.go 385: Populated endpoint ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"c70a3502-9740-4aed-bb80-e14b379ad2bd", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-9hsxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali22349892e82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.434 [INFO][2842] k8s.go 386: Calico CNI using IPs: [192.168.77.66/32] ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.434 [INFO][2842] dataplane_linux.go 68: Setting the host side veth name to cali22349892e82 ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.437 [INFO][2842] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.437 [INFO][2842] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"c70a3502-9740-4aed-bb80-e14b379ad2bd", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995", Pod:"nginx-deployment-8ffc5cf85-9hsxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali22349892e82", MAC:"3a:c0:83:2d:27:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:39.445359 env[1193]: 2024-02-09 19:46:39.443 [INFO][2842] k8s.go 491: Wrote updated endpoint to datastore ContainerID="836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995" Namespace="default" Pod="nginx-deployment-8ffc5cf85-9hsxq" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:39.459000 audit[2882]: NETFILTER_CFG table=filter:90 family=2 entries=40 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:39.462377 env[1193]: time="2024-02-09T19:46:39.462323221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:39.462377 env[1193]: time="2024-02-09T19:46:39.462356042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:39.462377 env[1193]: time="2024-02-09T19:46:39.462366081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:39.462564 env[1193]: time="2024-02-09T19:46:39.462465138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995 pid=2888 runtime=io.containerd.runc.v2 Feb 9 19:46:39.459000 audit[2882]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fff253f45a0 a2=0 a3=7fff253f458c items=0 ppid=2423 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:39.466306 kernel: audit: type=1325 audit(1707507999.459:272): table=filter:90 family=2 entries=40 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:39.466360 kernel: audit: type=1300 audit(1707507999.459:272): arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fff253f45a0 a2=0 a3=7fff253f458c items=0 ppid=2423 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:39.459000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:39.473104 kernel: audit: type=1327 audit(1707507999.459:272): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:39.481666 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:39.504195 env[1193]: time="2024-02-09T19:46:39.504154871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9hsxq,Uid:c70a3502-9740-4aed-bb80-e14b379ad2bd,Namespace:default,Attempt:1,} returns sandbox id \"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995\"" Feb 9 19:46:39.505632 env[1193]: time="2024-02-09T19:46:39.505594709Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:46:40.176647 kubelet[1543]: E0209 19:46:40.176586 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:41.149845 systemd-networkd[1073]: cali22349892e82: Gained IPv6LL Feb 9 19:46:41.177763 kubelet[1543]: E0209 19:46:41.177675 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:42.178099 kubelet[1543]: E0209 19:46:42.178050 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:43.141467 kubelet[1543]: E0209 19:46:43.141420 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:43.144829 env[1193]: time="2024-02-09T19:46:43.144786795Z" level=info msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" Feb 9 19:46:43.179136 kubelet[1543]: E0209 19:46:43.179104 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.171 [WARNING][2936] k8s.go 542: 
CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--r59lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82172907-09c1-4046-9b3d-fe68160d689e", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9", Pod:"csi-node-driver-r59lv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid6c90e29474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.171 [INFO][2936] k8s.go 578: Cleaning up netns ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.172 [INFO][2936] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" iface="eth0" netns="" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.172 [INFO][2936] k8s.go 585: Releasing IP address(es) ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.172 [INFO][2936] utils.go 188: Calico CNI releasing IP address ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.189 [INFO][2943] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.189 [INFO][2943] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.189 [INFO][2943] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.196 [WARNING][2943] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.196 [INFO][2943] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.197 [INFO][2943] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:43.199606 env[1193]: 2024-02-09 19:46:43.198 [INFO][2936] k8s.go 591: Teardown processing complete. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.200016 env[1193]: time="2024-02-09T19:46:43.199712600Z" level=info msg="TearDown network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" successfully" Feb 9 19:46:43.200016 env[1193]: time="2024-02-09T19:46:43.199751854Z" level=info msg="StopPodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" returns successfully" Feb 9 19:46:43.200267 env[1193]: time="2024-02-09T19:46:43.200234422Z" level=info msg="RemovePodSandbox for \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" Feb 9 19:46:43.200317 env[1193]: time="2024-02-09T19:46:43.200264969Z" level=info msg="Forcibly stopping sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\"" Feb 9 19:46:43.212415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134529504.mount: Deactivated successfully. Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.408 [WARNING][2966] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--r59lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82172907-09c1-4046-9b3d-fe68160d689e", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"7b6440b7c279034ef0543c585d52cf866a228789bc23dcb4220244994c884eb9", Pod:"csi-node-driver-r59lv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid6c90e29474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.408 [INFO][2966] k8s.go 578: Cleaning up netns ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.408 [INFO][2966] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" iface="eth0" netns="" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.408 [INFO][2966] k8s.go 585: Releasing IP address(es) ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.408 [INFO][2966] utils.go 188: Calico CNI releasing IP address ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.425 [INFO][2977] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.425 [INFO][2977] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.425 [INFO][2977] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.431 [WARNING][2977] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.431 [INFO][2977] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" HandleID="k8s-pod-network.8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Workload="10.0.0.71-k8s-csi--node--driver--r59lv-eth0" Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.432 [INFO][2977] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:43.434612 env[1193]: 2024-02-09 19:46:43.433 [INFO][2966] k8s.go 591: Teardown processing complete. ContainerID="8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d" Feb 9 19:46:43.435167 env[1193]: time="2024-02-09T19:46:43.435109118Z" level=info msg="TearDown network for sandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" successfully" Feb 9 19:46:43.437776 env[1193]: time="2024-02-09T19:46:43.437751497Z" level=info msg="RemovePodSandbox \"8f2b4c843cd50275aeaa7eeeefde2908b9f20f2e7232ef57f4703a213074128d\" returns successfully" Feb 9 19:46:43.438256 env[1193]: time="2024-02-09T19:46:43.438234876Z" level=info msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.465 [WARNING][2998] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"c70a3502-9740-4aed-bb80-e14b379ad2bd", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995", Pod:"nginx-deployment-8ffc5cf85-9hsxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali22349892e82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.466 [INFO][2998] k8s.go 578: Cleaning up netns ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.466 [INFO][2998] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" iface="eth0" netns="" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.466 [INFO][2998] k8s.go 585: Releasing IP address(es) ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.466 [INFO][2998] utils.go 188: Calico CNI releasing IP address ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.480 [INFO][3005] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.480 [INFO][3005] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.480 [INFO][3005] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.486 [WARNING][3005] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.486 [INFO][3005] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.487 [INFO][3005] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:43.490160 env[1193]: 2024-02-09 19:46:43.488 [INFO][2998] k8s.go 591: Teardown processing complete. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.490160 env[1193]: time="2024-02-09T19:46:43.490180618Z" level=info msg="TearDown network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" successfully" Feb 9 19:46:43.490160 env[1193]: time="2024-02-09T19:46:43.490213830Z" level=info msg="StopPodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" returns successfully" Feb 9 19:46:43.490797 env[1193]: time="2024-02-09T19:46:43.490606448Z" level=info msg="RemovePodSandbox for \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" Feb 9 19:46:43.490797 env[1193]: time="2024-02-09T19:46:43.490630734Z" level=info msg="Forcibly stopping sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\"" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.518 [WARNING][3027] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"c70a3502-9740-4aed-bb80-e14b379ad2bd", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995", Pod:"nginx-deployment-8ffc5cf85-9hsxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali22349892e82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.518 [INFO][3027] k8s.go 578: Cleaning up netns ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.518 [INFO][3027] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" iface="eth0" netns="" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.518 [INFO][3027] k8s.go 585: Releasing IP address(es) ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.518 [INFO][3027] utils.go 188: Calico CNI releasing IP address ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.534 [INFO][3035] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.534 [INFO][3035] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.534 [INFO][3035] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.541 [WARNING][3035] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.541 [INFO][3035] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" HandleID="k8s-pod-network.1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Workload="10.0.0.71-k8s-nginx--deployment--8ffc5cf85--9hsxq-eth0" Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.543 [INFO][3035] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:43.544981 env[1193]: 2024-02-09 19:46:43.544 [INFO][3027] k8s.go 591: Teardown processing complete. ContainerID="1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf" Feb 9 19:46:43.545430 env[1193]: time="2024-02-09T19:46:43.545016874Z" level=info msg="TearDown network for sandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" successfully" Feb 9 19:46:43.547927 env[1193]: time="2024-02-09T19:46:43.547891629Z" level=info msg="RemovePodSandbox \"1ecc1b4cd00ae70102d3a9c610edefcce6fe59368b4e686a0be29d4b947bbccf\" returns successfully" Feb 9 19:46:44.179858 kubelet[1543]: E0209 19:46:44.179821 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:44.374125 env[1193]: time="2024-02-09T19:46:44.374080683Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:44.375771 env[1193]: time="2024-02-09T19:46:44.375746534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:44.377295 env[1193]: time="2024-02-09T19:46:44.377272634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:44.378599 env[1193]: time="2024-02-09T19:46:44.378577728Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:44.379127 env[1193]: time="2024-02-09T19:46:44.379100490Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:46:44.380339 env[1193]: time="2024-02-09T19:46:44.380308421Z" level=info msg="CreateContainer within sandbox \"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:46:44.388322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506855259.mount: Deactivated successfully. 
Feb 9 19:46:44.389557 env[1193]: time="2024-02-09T19:46:44.389529025Z" level=info msg="CreateContainer within sandbox \"836e30f59ddd3994d5db918f157ec469c047c8f3ab361b06bb7ca892cb3ae995\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a566543ea2b7793ecbcce95fccfbff603db20ffff8cec6e560167cc77504cb67\"" Feb 9 19:46:44.389831 env[1193]: time="2024-02-09T19:46:44.389808580Z" level=info msg="StartContainer for \"a566543ea2b7793ecbcce95fccfbff603db20ffff8cec6e560167cc77504cb67\"" Feb 9 19:46:44.512503 env[1193]: time="2024-02-09T19:46:44.512385608Z" level=info msg="StartContainer for \"a566543ea2b7793ecbcce95fccfbff603db20ffff8cec6e560167cc77504cb67\" returns successfully" Feb 9 19:46:45.180360 kubelet[1543]: E0209 19:46:45.180326 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:45.402589 kubelet[1543]: I0209 19:46:45.402550 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-9hsxq" podStartSLOduration=-9.22337201545226e+09 pod.CreationTimestamp="2024-02-09 19:46:24 +0000 UTC" firstStartedPulling="2024-02-09 19:46:39.505312369 +0000 UTC m=+56.515258435" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:45.40242841 +0000 UTC m=+62.412374476" watchObservedRunningTime="2024-02-09 19:46:45.402516045 +0000 UTC m=+62.412462112" Feb 9 19:46:46.180774 kubelet[1543]: E0209 19:46:46.180729 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:47.181062 kubelet[1543]: E0209 19:46:47.181010 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:48.181819 kubelet[1543]: E0209 19:46:48.181761 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:49.182311 kubelet[1543]: E0209 19:46:49.182248 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:50.183310 kubelet[1543]: E0209 19:46:50.183251 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:51.183806 kubelet[1543]: E0209 19:46:51.183765 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:51.849000 audit[3133]: NETFILTER_CFG table=filter:91 family=2 entries=18 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.849000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffeccea55c0 a2=0 a3=7ffeccea55ac items=0 ppid=1748 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.855362 kernel: audit: type=1325 audit(1707508011.849:273): table=filter:91 family=2 entries=18 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.855441 kernel: audit: type=1300 audit(1707508011.849:273): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffeccea55c0 a2=0 a3=7ffeccea55ac items=0 ppid=1748 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.855469 kernel: audit: type=1327 audit(1707508011.849:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.849000 audit[3133]: NETFILTER_CFG table=nat:92 family=2 entries=78 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.849000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeccea55c0 a2=0 a3=7ffeccea55ac items=0 ppid=1748 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.867897 kernel: audit: type=1325 audit(1707508011.849:274): table=nat:92 family=2 entries=78 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.867945 kernel: audit: type=1300 audit(1707508011.849:274): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeccea55c0 a2=0 a3=7ffeccea55ac items=0 ppid=1748 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.867970 kernel: audit: type=1327 audit(1707508011.849:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.889000 audit[3159]: NETFILTER_CFG table=filter:93 family=2 entries=30 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.889000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff989da040 a2=0 a3=7fff989da02c items=0 ppid=1748 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.920079 kernel: audit: type=1325 audit(1707508011.889:275): table=filter:93 family=2 entries=30 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.920119 kernel: audit: type=1300 audit(1707508011.889:275): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff989da040 a2=0 a3=7fff989da02c items=0 ppid=1748 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.920137 kernel: audit: type=1327 audit(1707508011.889:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.890000 audit[3159]: NETFILTER_CFG table=nat:94 family=2 entries=78 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 
19:46:51.890000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff989da040 a2=0 a3=7fff989da02c items=0 ppid=1748 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.924700 kernel: audit: type=1325 audit(1707508011.890:276): table=nat:94 family=2 entries=78 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:52.027870 kubelet[1543]: I0209 19:46:52.027817 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:52.184625 kubelet[1543]: E0209 19:46:52.184518 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:52.219761 kubelet[1543]: I0209 19:46:52.219735 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtlt\" (UniqueName: \"kubernetes.io/projected/5c20239a-6b79-4bac-9cb8-0f401c97faba-kube-api-access-2mtlt\") pod \"nfs-server-provisioner-0\" (UID: \"5c20239a-6b79-4bac-9cb8-0f401c97faba\") " pod="default/nfs-server-provisioner-0" Feb 9 19:46:52.219882 kubelet[1543]: I0209 19:46:52.219860 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5c20239a-6b79-4bac-9cb8-0f401c97faba-data\") pod \"nfs-server-provisioner-0\" (UID: \"5c20239a-6b79-4bac-9cb8-0f401c97faba\") " pod="default/nfs-server-provisioner-0" Feb 9 19:46:52.331129 env[1193]: time="2024-02-09T19:46:52.331092990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c20239a-6b79-4bac-9cb8-0f401c97faba,Namespace:default,Attempt:0,}" Feb 9 19:46:52.408996 systemd-networkd[1073]: cali60e51b789ff: Link UP Feb 9 19:46:52.410419 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:52.410524 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 19:46:52.410597 systemd-networkd[1073]: cali60e51b789ff: Gained carrier Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.363 [INFO][3162] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5c20239a-6b79-4bac-9cb8-0f401c97faba 1131 0 2024-02-09 19:46:51 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.71 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.363 [INFO][3162] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.381 [INFO][3175] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" HandleID="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.390 [INFO][3175] ipam_plugin.go 268: Auto assigning IP ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" HandleID="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027b910), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 19:46:52.381893812 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.390 [INFO][3175] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.390 [INFO][3175] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.390 [INFO][3175] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.391 [INFO][3175] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.394 [INFO][3175] ipam.go 372: Looking up existing affinities for host host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.397 [INFO][3175] ipam.go 489: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.398 [INFO][3175] ipam.go 155: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.400 [INFO][3175] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.400 [INFO][3175] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.401 [INFO][3175] ipam.go 1682: Creating new handle: k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.403 [INFO][3175] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.406 [INFO][3175] ipam.go 1216: Successfully claimed IPs: [192.168.77.67/26] block=192.168.77.64/26 handle="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.406 [INFO][3175] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.67/26] handle="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" host="10.0.0.71" Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.406 [INFO][3175] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:52.417320 env[1193]: 2024-02-09 19:46:52.406 [INFO][3175] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.77.67/26] IPv6=[] ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" HandleID="k8s-pod-network.5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.418016 env[1193]: 2024-02-09 19:46:52.407 [INFO][3162] k8s.go 385: Populated endpoint ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c20239a-6b79-4bac-9cb8-0f401c97faba", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:52.418016 env[1193]: 2024-02-09 19:46:52.407 [INFO][3162] k8s.go 386: Calico CNI using IPs: [192.168.77.67/32] ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.418016 env[1193]: 2024-02-09 19:46:52.407 [INFO][3162] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.418016 env[1193]: 2024-02-09 19:46:52.410 [INFO][3162] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.418195 env[1193]: 2024-02-09 19:46:52.411 [INFO][3162] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c20239a-6b79-4bac-9cb8-0f401c97faba", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e2:e7:5f:38:73:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:52.418195 env[1193]: 2024-02-09 19:46:52.416 [INFO][3162] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:46:52.426000 audit[3206]: NETFILTER_CFG table=filter:95 family=2 entries=38 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:52.426000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=19500 a0=3 a1=7ffe9f73bab0 a2=0 a3=7ffe9f73ba9c items=0 ppid=2423 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:52.426000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:52.428129 env[1193]: time="2024-02-09T19:46:52.428072462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:52.428129 env[1193]: time="2024-02-09T19:46:52.428113490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:52.428129 env[1193]: time="2024-02-09T19:46:52.428124751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:52.428340 env[1193]: time="2024-02-09T19:46:52.428266197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc pid=3211 runtime=io.containerd.runc.v2 Feb 9 19:46:52.448728 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:52.470947 env[1193]: time="2024-02-09T19:46:52.470913576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c20239a-6b79-4bac-9cb8-0f401c97faba,Namespace:default,Attempt:0,} returns sandbox id \"5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc\"" Feb 9 19:46:52.472179 env[1193]: time="2024-02-09T19:46:52.472146231Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:46:53.184935 kubelet[1543]: E0209 19:46:53.184871 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:53.437882 systemd-networkd[1073]: cali60e51b789ff: Gained IPv6LL Feb 9 19:46:54.185878 kubelet[1543]: E0209 19:46:54.185838 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:55.186195 kubelet[1543]: E0209 19:46:55.186140 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:55.271991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002966567.mount: Deactivated successfully. Feb 9 19:46:56.186537 kubelet[1543]: E0209 19:46:56.186481 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:57.187272 kubelet[1543]: E0209 19:46:57.187221 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:57.314869 systemd[1]: run-containerd-runc-k8s.io-19851769a81de66f201e7426dac15bf1b3f74fd399ae5a00447e65f429833759-runc.arEMEY.mount: Deactivated successfully. 
Feb 9 19:46:57.356088 kubelet[1543]: E0209 19:46:57.356065 1543 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:58.187845 kubelet[1543]: E0209 19:46:58.187795 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:59.187940 kubelet[1543]: E0209 19:46:59.187893 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:46:59.222947 env[1193]: time="2024-02-09T19:46:59.222899640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:59.254262 env[1193]: time="2024-02-09T19:46:59.254200768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:59.256613 env[1193]: time="2024-02-09T19:46:59.256559956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:59.258299 env[1193]: time="2024-02-09T19:46:59.258272209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:59.258952 env[1193]: time="2024-02-09T19:46:59.258922220Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:46:59.260836 env[1193]: time="2024-02-09T19:46:59.260805885Z" level=info msg="CreateContainer within sandbox \"5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:46:59.286361 env[1193]: time="2024-02-09T19:46:59.286315253Z" level=info msg="CreateContainer within sandbox \"5f0abfde6d69707612ba067748d7363868bc1a704804920b856284ad644659dc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"65460b76e952f095890517330b424c5d559f7d7bb2e79131d8b9e323a57cdc4c\"" Feb 9 19:46:59.286702 env[1193]: time="2024-02-09T19:46:59.286663818Z" level=info msg="StartContainer for \"65460b76e952f095890517330b424c5d559f7d7bb2e79131d8b9e323a57cdc4c\"" Feb 9 19:46:59.322539 env[1193]: time="2024-02-09T19:46:59.322490680Z" level=info msg="StartContainer for \"65460b76e952f095890517330b424c5d559f7d7bb2e79131d8b9e323a57cdc4c\" returns successfully" Feb 9 19:46:59.459000 audit[3353]: NETFILTER_CFG table=filter:96 family=2 entries=18 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:59.461215 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 19:46:59.461275 kernel: audit: type=1325 audit(1707508019.459:278): table=filter:96 family=2 entries=18 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:59.459000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff1816a6e0 a2=0 a3=7fff1816a6cc items=0 ppid=1748 pid=3353 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.465866 kernel: audit: type=1300 audit(1707508019.459:278): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff1816a6e0 a2=0 a3=7fff1816a6cc items=0 ppid=1748 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.465908 kernel: audit: type=1327 audit(1707508019.459:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:59.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:59.462000 audit[3353]: NETFILTER_CFG table=nat:97 family=2 entries=162 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:59.462000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fff1816a6e0 a2=0 a3=7fff1816a6cc items=0 ppid=1748 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.477449 kernel: audit: type=1325 audit(1707508019.462:279): table=nat:97 family=2 entries=162 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:59.477498 kernel: audit: type=1300 audit(1707508019.462:279): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fff1816a6e0 a2=0 a3=7fff1816a6cc items=0 ppid=1748 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.477519 kernel: audit: type=1327 audit(1707508019.462:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:59.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.188416 kubelet[1543]: E0209 19:47:00.188381 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:00.336817 kubelet[1543]: I0209 19:47:00.336787 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027518036e+09 pod.CreationTimestamp="2024-02-09 19:46:51 +0000 UTC" firstStartedPulling="2024-02-09 19:46:52.471966564 +0000 UTC m=+69.481912630" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:59.426633012 +0000 UTC m=+76.436579078" watchObservedRunningTime="2024-02-09 19:47:00.336740401 +0000 UTC m=+77.346686467" Feb 9 19:47:00.337059 kubelet[1543]: I0209 19:47:00.337041 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:47:00.351000 audit[3380]: NETFILTER_CFG table=filter:98 family=2 entries=7 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.351000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff92e48350 a2=0 
a3=7fff92e4833c items=0 ppid=1748 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:00.357611 kernel: audit: type=1325 audit(1707508020.351:280): table=filter:98 family=2 entries=7 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.357690 kernel: audit: type=1300 audit(1707508020.351:280): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff92e48350 a2=0 a3=7fff92e4833c items=0 ppid=1748 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:00.357720 kernel: audit: type=1327 audit(1707508020.351:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.355000 audit[3380]: NETFILTER_CFG table=nat:99 family=2 entries=198 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.355000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fff92e48350 a2=0 a3=7fff92e4833c items=0 ppid=1748 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:00.366768 kernel: audit: type=1325 audit(1707508020.355:281): table=nat:99 family=2 entries=198 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.454935 kubelet[1543]: I0209 19:47:00.454830 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q48f\" (UniqueName: \"kubernetes.io/projected/e49c9d74-7607-4964-acc2-1cad3c06f09b-kube-api-access-6q48f\") pod \"calico-apiserver-7777497956-xwcrw\" (UID: \"e49c9d74-7607-4964-acc2-1cad3c06f09b\") " pod="calico-apiserver/calico-apiserver-7777497956-xwcrw" Feb 9 19:47:00.454935 kubelet[1543]: I0209 19:47:00.454882 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e49c9d74-7607-4964-acc2-1cad3c06f09b-calico-apiserver-certs\") pod \"calico-apiserver-7777497956-xwcrw\" (UID: \"e49c9d74-7607-4964-acc2-1cad3c06f09b\") " pod="calico-apiserver/calico-apiserver-7777497956-xwcrw" Feb 9 19:47:00.556751 kubelet[1543]: E0209 19:47:00.556706 1543 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:47:00.556935 kubelet[1543]: E0209 19:47:00.556807 1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49c9d74-7607-4964-acc2-1cad3c06f09b-calico-apiserver-certs podName:e49c9d74-7607-4964-acc2-1cad3c06f09b nodeName:}" failed. No retries permitted until 2024-02-09 19:47:01.05677836 +0000 UTC m=+78.066724426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e49c9d74-7607-4964-acc2-1cad3c06f09b-calico-apiserver-certs") pod "calico-apiserver-7777497956-xwcrw" (UID: "e49c9d74-7607-4964-acc2-1cad3c06f09b") : secret "calico-apiserver-certs" not found Feb 9 19:47:01.188755 kubelet[1543]: E0209 19:47:01.188696 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:01.240561 env[1193]: time="2024-02-09T19:47:01.240509551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7777497956-xwcrw,Uid:e49c9d74-7607-4964-acc2-1cad3c06f09b,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:47:01.393000 audit[3408]: NETFILTER_CFG table=filter:100 family=2 entries=8 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:01.393000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe967a2d00 a2=0 a3=7ffe967a2cec items=0 ppid=1748 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:01.396000 audit[3408]: NETFILTER_CFG table=nat:101 family=2 entries=198 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:01.396000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe967a2d00 a2=0 a3=7ffe967a2cec items=0 ppid=1748 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:01.511955 systemd-networkd[1073]: cali9704c0cd49c: Link UP Feb 9 19:47:01.513961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:47:01.514014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9704c0cd49c: link becomes ready Feb 9 19:47:01.513850 systemd-networkd[1073]: cali9704c0cd49c: Gained carrier Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.444 [INFO][3414] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0 calico-apiserver-7777497956- calico-apiserver e49c9d74-7607-4964-acc2-1cad3c06f09b 1213 0 2024-02-09 19:47:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7777497956 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.71 calico-apiserver-7777497956-xwcrw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9704c0cd49c [] []}} ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.444 [INFO][3414] k8s.go 76: Extracted identifiers for CmdAddK8s 
ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.469 [INFO][3423] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" HandleID="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Workload="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.479 [INFO][3423] ipam_plugin.go 268: Auto assigning IP ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" HandleID="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Workload="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029f940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.71", "pod":"calico-apiserver-7777497956-xwcrw", "timestamp":"2024-02-09 19:47:01.469572281 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.479 [INFO][3423] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.479 [INFO][3423] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.479 [INFO][3423] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.480 [INFO][3423] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.485 [INFO][3423] ipam.go 372: Looking up existing affinities for host host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.495 [INFO][3423] ipam.go 489: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.496 [INFO][3423] ipam.go 155: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.500 [INFO][3423] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.500 [INFO][3423] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.501 [INFO][3423] ipam.go 1682: Creating new handle: k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0 Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.504 [INFO][3423] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.508 [INFO][3423] ipam.go 1216: Successfully claimed IPs: [192.168.77.68/26] block=192.168.77.64/26 
handle="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.508 [INFO][3423] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.68/26] handle="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" host="10.0.0.71" Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.508 [INFO][3423] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:47:01.523317 env[1193]: 2024-02-09 19:47:01.508 [INFO][3423] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.77.68/26] IPv6=[] ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" HandleID="k8s-pod-network.8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Workload="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.510 [INFO][3414] k8s.go 385: Populated endpoint ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0", GenerateName:"calico-apiserver-7777497956-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49c9d74-7607-4964-acc2-1cad3c06f09b", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7777497956", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"calico-apiserver-7777497956-xwcrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9704c0cd49c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.510 [INFO][3414] k8s.go 386: Calico CNI using IPs: [192.168.77.68/32] ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.510 [INFO][3414] dataplane_linux.go 68: Setting the host side veth name to cali9704c0cd49c ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.514 [INFO][3414] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.514 [INFO][3414] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0", GenerateName:"calico-apiserver-7777497956-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49c9d74-7607-4964-acc2-1cad3c06f09b", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7777497956", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0", Pod:"calico-apiserver-7777497956-xwcrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9704c0cd49c", MAC:"3e:1f:03:97:85:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:01.524007 env[1193]: 2024-02-09 19:47:01.522 [INFO][3414] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-xwcrw" WorkloadEndpoint="10.0.0.71-k8s-calico--apiserver--7777497956--xwcrw-eth0" Feb 9 19:47:01.538000 audit[3450]: NETFILTER_CFG table=filter:102 family=2 entries=55 op=nft_register_chain pid=3450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:47:01.538000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=28104 a0=3 a1=7fff10a615c0 a2=0 a3=7fff10a615ac items=0 ppid=2423 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.538000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:47:01.831462 env[1193]: time="2024-02-09T19:47:01.831387852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:47:01.831462 env[1193]: time="2024-02-09T19:47:01.831420543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:47:01.831462 env[1193]: time="2024-02-09T19:47:01.831429981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:47:01.831722 env[1193]: time="2024-02-09T19:47:01.831522805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0 pid=3458 runtime=io.containerd.runc.v2 Feb 9 19:47:01.851775 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:47:01.875051 env[1193]: time="2024-02-09T19:47:01.874989517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7777497956-xwcrw,Uid:e49c9d74-7607-4964-acc2-1cad3c06f09b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0\"" Feb 9 19:47:01.876246 env[1193]: time="2024-02-09T19:47:01.876198506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:47:02.189908 kubelet[1543]: E0209 19:47:02.189777 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:03.037890 systemd-networkd[1073]: cali9704c0cd49c: Gained IPv6LL Feb 9 19:47:03.142306 kubelet[1543]: E0209 19:47:03.142265 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:03.190694 kubelet[1543]: E0209 19:47:03.190653 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:04.190817 kubelet[1543]: E0209 19:47:04.190774 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:05.191830 kubelet[1543]: E0209 19:47:05.191773 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:06.192488 kubelet[1543]: E0209 19:47:06.192436 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:06.574842 env[1193]: time="2024-02-09T19:47:06.574785523Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.576860 env[1193]: time="2024-02-09T19:47:06.576802196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.578779 env[1193]: time="2024-02-09T19:47:06.578742678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.580490 env[1193]: time="2024-02-09T19:47:06.580434422Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.581196 env[1193]: time="2024-02-09T19:47:06.581153271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image 
reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:47:06.583205 env[1193]: time="2024-02-09T19:47:06.583157951Z" level=info msg="CreateContainer within sandbox \"8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:47:06.593016 env[1193]: time="2024-02-09T19:47:06.592968899Z" level=info msg="CreateContainer within sandbox \"8f1746735b291f4c55bed92b3cd749847a4b9146117e3ad47c2af68b28960cc0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"113452bc155bd46fb56963ce43b07b79258ec3f5019c6c4915609399823d28c0\"" Feb 9 19:47:06.593452 env[1193]: time="2024-02-09T19:47:06.593414845Z" level=info msg="StartContainer for \"113452bc155bd46fb56963ce43b07b79258ec3f5019c6c4915609399823d28c0\"" Feb 9 19:47:06.651211 env[1193]: time="2024-02-09T19:47:06.651153163Z" level=info msg="StartContainer for \"113452bc155bd46fb56963ce43b07b79258ec3f5019c6c4915609399823d28c0\" returns successfully" Feb 9 19:47:07.193200 kubelet[1543]: E0209 19:47:07.193134 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:07.341000 audit[3559]: NETFILTER_CFG table=filter:103 family=2 entries=8 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.343152 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 19:47:07.343183 kernel: audit: type=1325 audit(1707508027.341:285): table=filter:103 family=2 entries=8 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.341000 audit[3559]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc94d1b350 a2=0 a3=7ffc94d1b33c items=0 ppid=1748 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.347906 kernel: audit: type=1300 audit(1707508027.341:285): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc94d1b350 a2=0 a3=7ffc94d1b33c items=0 ppid=1748 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.347955 kernel: audit: type=1327 audit(1707508027.341:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.344000 audit[3559]: NETFILTER_CFG table=nat:104 family=2 entries=198 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.344000 audit[3559]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc94d1b350 a2=0 a3=7ffc94d1b33c items=0 ppid=1748 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.364327 kernel: audit: type=1325 audit(1707508027.344:286): table=nat:104 family=2 entries=198 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.364364 kernel: audit: type=1300 
audit(1707508027.344:286): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc94d1b350 a2=0 a3=7ffc94d1b33c items=0 ppid=1748 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.364390 kernel: audit: type=1327 audit(1707508027.344:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.440623 kubelet[1543]: I0209 19:47:07.440582 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7777497956-xwcrw" podStartSLOduration=-9.223372029414246e+09 pod.CreationTimestamp="2024-02-09 19:47:00 +0000 UTC" firstStartedPulling="2024-02-09 19:47:01.876026553 +0000 UTC m=+78.885972620" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:47:07.440459632 +0000 UTC m=+84.450405718" watchObservedRunningTime="2024-02-09 19:47:07.440530986 +0000 UTC m=+84.450477052" Feb 9 19:47:07.471000 audit[3585]: NETFILTER_CFG table=filter:105 family=2 entries=8 op=nft_register_rule pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.471000 audit[3585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd001fe110 a2=0 a3=7ffd001fe0fc items=0 ppid=1748 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.478636 kernel: audit: type=1325 audit(1707508027.471:287): table=filter:105 family=2 entries=8 op=nft_register_rule pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.478797 kernel: audit: type=1300 audit(1707508027.471:287): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd001fe110 a2=0 a3=7ffd001fe0fc items=0 ppid=1748 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.478837 kernel: audit: type=1327 audit(1707508027.471:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.471000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.474000 audit[3585]: NETFILTER_CFG table=nat:106 family=2 entries=198 op=nft_register_rule pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.474000 audit[3585]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd001fe110 a2=0 a3=7ffd001fe0fc items=0 ppid=1748 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.485702 kernel: audit: type=1325 audit(1707508027.474:288): table=nat:106 family=2 entries=198 op=nft_register_rule 
pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:08.193580 kubelet[1543]: E0209 19:47:08.193533 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:09.193801 kubelet[1543]: E0209 19:47:09.193663 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:09.285587 kubelet[1543]: I0209 19:47:09.285547 1543 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:47:09.401638 kubelet[1543]: I0209 19:47:09.401594 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eba1ec8b-4537-46ee-8fe4-a155b4f9010c\" (UniqueName: \"kubernetes.io/nfs/166aa272-29ae-44d6-80d6-6805403d5b77-pvc-eba1ec8b-4537-46ee-8fe4-a155b4f9010c\") pod \"test-pod-1\" (UID: \"166aa272-29ae-44d6-80d6-6805403d5b77\") " pod="default/test-pod-1" Feb 9 19:47:09.401638 kubelet[1543]: I0209 19:47:09.401661 1543 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcf4j\" (UniqueName: \"kubernetes.io/projected/166aa272-29ae-44d6-80d6-6805403d5b77-kube-api-access-fcf4j\") pod \"test-pod-1\" (UID: \"166aa272-29ae-44d6-80d6-6805403d5b77\") " pod="default/test-pod-1" Feb 9 19:47:09.510000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.510000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.510000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.514050 kernel: Failed to create system directory netfs Feb 9 19:47:09.514102 kernel: Failed to create system directory netfs Feb 9 19:47:09.514119 kernel: Failed to create system directory netfs Feb 9 19:47:09.514132 kernel: Failed to create system directory netfs Feb 9 19:47:09.510000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.510000 audit[3596]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55616c04e5e0 a1=153bc a2=55616bdd02b0 a3=5 items=0 ppid=68 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.510000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { 
confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.522747 kernel: Failed to create system directory fscache Feb 9 19:47:09.522801 kernel: Failed to create system directory fscache Feb 9 19:47:09.522825 kernel: Failed to create system directory fscache Feb 9 19:47:09.522851 kernel: Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.523775 kernel: Failed to create system directory fscache Feb 9 19:47:09.523801 kernel: Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.524802 kernel: Failed to create system directory fscache Feb 9 19:47:09.524845 kernel: Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.525843 kernel: Failed to create system directory fscache Feb 9 19:47:09.525873 kernel: Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.526866 kernel: Failed to create system directory fscache Feb 9 19:47:09.526911 kernel: Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.527844 kernel: Failed to create system directory fscache Feb 9 19:47:09.527870 kernel: 
Failed to create system directory fscache Feb 9 19:47:09.518000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.518000 audit[3596]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55616c2639c0 a1=4c0fc a2=55616bdd02b0 a3=5 items=0 ppid=68 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.518000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:47:09.530713 kernel: FS-Cache: Loaded Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.556127 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.556200 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.556230 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.556249 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.557721 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.557750 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.557764 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.558723 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.558743 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.559734 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.559758 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.560724 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.560743 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.561709 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.561735 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.562701 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.562726 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.563782 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.563813 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.565151 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.565175 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.565190 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.566136 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.566158 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.567213 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.567876 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.567931 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.568926 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.568974 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.569961 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.570009 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 9 19:47:09.571004 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.571050 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.572042 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.572088 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.573101 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.573152 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.574171 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.574210 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.575753 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.575782 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.575810 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.576820 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.576856 kernel: Failed to create 
system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.577908 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.577944 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.578997 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.579035 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.580045 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.580080 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.581098 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.581127 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.582147 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.582175 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.583189 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.583217 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.584770 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.584800 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.584820 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.585813 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.585842 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.586884 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.586926 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.588044 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.588081 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.589093 kernel: Failed to create system directory 
sunrpc Feb 9 19:47:09.589122 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.590129 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.590167 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.591807 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.591842 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.591862 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.592848 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.592883 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.593889 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.593917 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.594930 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.594959 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { 
confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.595997 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.596035 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.597079 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.597116 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.598140 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.598175 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.598698 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.599738 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.599773 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.600807 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.600842 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.601870 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.601899 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.602920 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.602959 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.603964 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.603993 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.605008 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.605037 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.606056 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.606095 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.607147 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.607186 kernel: Failed to create system directory 
sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.607698 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.608725 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.608753 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.609771 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.609800 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.610817 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.610846 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.611873 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.611903 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.612938 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.612967 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.614002 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.614029 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.615062 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.615091 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.616118 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.616153 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.617191 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.617219 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.618830 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.618862 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.618882 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 
19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.619870 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.619899 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.545000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.620919 kernel: Failed to create system directory sunrpc Feb 9 19:47:09.628761 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:47:09.628810 kernel: RPC: Registered udp transport module. Feb 9 19:47:09.628832 kernel: RPC: Registered tcp transport module. Feb 9 19:47:09.629874 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:47:09.545000 audit[3596]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55616c2afad0 a1=1588c4 a2=55616bdd02b0 a3=5 items=6 ppid=68 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.545000 audit: CWD cwd="/" Feb 9 19:47:09.545000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PATH item=1 name=(null) inode=24401 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PATH item=2 name=(null) inode=24401 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PATH item=3 name=(null) inode=24402 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PATH item=4 name=(null) inode=24401 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PATH item=5 name=(null) inode=24403 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:47:09.545000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.654879 kernel: Failed to create system directory nfs 
Feb 9 19:47:09.654922 kernel: Failed to create system directory nfs Feb 9 19:47:09.654940 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.655819 kernel: Failed to create system directory nfs Feb 9 19:47:09.655838 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.656768 kernel: Failed to create system directory nfs Feb 9 19:47:09.656804 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.657724 kernel: Failed to create system directory nfs Feb 9 19:47:09.657747 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.659119 kernel: Failed to create system directory nfs Feb 9 19:47:09.659137 kernel: Failed to create system directory nfs Feb 9 19:47:09.659150 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.660061 kernel: Failed to create system directory nfs Feb 9 19:47:09.660088 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 
audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.661004 kernel: Failed to create system directory nfs Feb 9 19:47:09.661025 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.661959 kernel: Failed to create system directory nfs Feb 9 19:47:09.661985 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.662920 kernel: Failed to create system directory nfs Feb 9 19:47:09.662948 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.663889 kernel: Failed to create system directory nfs Feb 9 19:47:09.663915 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.664849 kernel: Failed to create system directory nfs Feb 9 19:47:09.664876 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 
19:47:09.665829 kernel: Failed to create system directory nfs Feb 9 19:47:09.665857 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.666762 kernel: Failed to create system directory nfs Feb 9 19:47:09.666797 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.667697 kernel: Failed to create system directory nfs Feb 9 19:47:09.667724 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.669071 kernel: Failed to create system directory nfs Feb 9 19:47:09.669099 kernel: Failed to create system directory nfs Feb 9 19:47:09.669115 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.669996 kernel: Failed to create system directory nfs Feb 9 19:47:09.670016 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.670920 kernel: Failed to create system directory nfs Feb 9 19:47:09.670946 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: 
AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.671850 kernel: Failed to create system directory nfs Feb 9 19:47:09.671871 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.672799 kernel: Failed to create system directory nfs Feb 9 19:47:09.672827 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.673740 kernel: Failed to create system directory nfs Feb 9 19:47:09.673758 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.674692 kernel: Failed to create system directory nfs Feb 9 19:47:09.674708 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.676134 kernel: Failed to create system directory nfs Feb 9 19:47:09.676150 kernel: Failed to create system directory nfs Feb 9 19:47:09.676163 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.677074 kernel: Failed to create system directory nfs Feb 9 19:47:09.677105 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.677995 kernel: Failed to create system directory nfs Feb 9 19:47:09.678018 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.678922 kernel: Failed to create system directory nfs Feb 9 19:47:09.678940 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.679851 kernel: Failed to create system directory nfs Feb 9 19:47:09.679868 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.680777 kernel: Failed to create system directory nfs Feb 9 19:47:09.680795 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.648000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.681702 kernel: Failed to create system directory nfs Feb 9 19:47:09.648000 audit[3596]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 
a0=55616c452680 a1=e29dc a2=55616bdd02b0 a3=5 items=0 ppid=68 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.648000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:47:09.694707 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.727831 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.727883 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.727907 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.727927 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.728892 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.728930 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.729911 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.729937 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.730932 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.730967 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: 
AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.732079 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.732116 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.733104 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.733132 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.734120 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.734142 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.735191 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.735227 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.735727 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.736731 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.736784 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.738008 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.738125 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.739244 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.739282 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.740852 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.740890 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.740913 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.741915 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.741951 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.743097 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.743143 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.744173 kernel: Failed to create system directory nfs4 Feb 9 
19:47:09.744225 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.745196 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.745227 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.746725 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.746761 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.746788 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.747750 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.747789 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.748742 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.748774 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.749741 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.749774 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.750761 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.750799 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.751765 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.751789 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.752780 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.752823 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.753914 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.753971 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.755210 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.755248 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality 
} for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.756835 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.756872 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.756899 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.757863 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.757896 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.758882 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.758915 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.759899 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.759926 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.761147 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.761199 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.762177 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.762210 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.763174 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.763203 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.764416 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.764503 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.766698 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.767904 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.767945 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.769807 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.769864 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.771525 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.771586 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 
Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.773007 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.773037 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.774112 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.774141 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.775126 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.775153 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.776147 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.776190 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.777192 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.777228 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 19:47:09.778713 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.778747 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.778767 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.779806 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.779855 kernel: Failed to create system directory nfs4 Feb 9 19:47:09.716000 audit[3602]: AVC avc: denied { confidentiality } for pid=3602 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.923072 kernel: NFS: Registering the id_resolver key type Feb 9 19:47:09.923220 kernel: Key type id_resolver registered Feb 9 19:47:09.923238 kernel: Key type id_legacy registered Feb 9 19:47:09.716000 audit[3602]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f4598655010 a1=1d3cc4 a2=55e9fb0fc2b0 a3=5 items=0 ppid=68 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.716000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.932756 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.932817 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.932837 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.933779 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.933821 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.934816 kernel: 
Failed to create system directory rpcgss Feb 9 19:47:09.934855 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.935904 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.935954 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.936822 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.936863 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.937871 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.937898 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.938873 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.938909 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.939857 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.939879 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.940861 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.940900 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.941839 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.941865 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.942844 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.942869 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.943845 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.943884 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.928000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:47:09.944831 kernel: Failed to create system directory rpcgss Feb 9 19:47:09.928000 audit[3604]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f006a4f5010 a1=4f524 a2=555c95c3f2b0 a3=5 items=0 ppid=68 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.928000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 9 19:47:09.955306 nfsidmap[3612]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:47:09.957546 nfsidmap[3616]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:47:09.965000 audit[1]: AVC avc: denied { watch_reads } for pid=1 
comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:47:09.965000 audit[1271]: AVC avc: denied { watch_reads } for pid=1271 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:47:09.965000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:47:09.965000 audit[1271]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e2dcf2b0d0 a2=10 a3=e577bdcb1df5a9d items=0 ppid=1 pid=1271 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.965000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:47:09.965000 audit[1271]: AVC avc: denied { watch_reads } for pid=1271 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:47:09.965000 audit[1271]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e2dcf2b0d0 a2=10 a3=e577bdcb1df5a9d items=0 ppid=1 pid=1271 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.965000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:47:09.965000 audit[1271]: AVC avc: denied { watch_reads } for pid=1271 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:47:09.965000 audit[1271]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e2dcf2b0d0 a2=10 a3=e577bdcb1df5a9d items=0 ppid=1 pid=1271 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:09.965000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:47:10.189077 env[1193]: time="2024-02-09T19:47:10.189035626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:166aa272-29ae-44d6-80d6-6805403d5b77,Namespace:default,Attempt:0,}" Feb 9 19:47:10.194223 kubelet[1543]: E0209 19:47:10.194195 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:10.571910 systemd-networkd[1073]: cali5ec59c6bf6e: Link UP Feb 9 19:47:10.573454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:47:10.573530 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 9 19:47:10.573742 systemd-networkd[1073]: cali5ec59c6bf6e: Gained carrier Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.518 [INFO][3619] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-test--pod--1-eth0 default 
166aa272-29ae-44d6-80d6-6805403d5b77 1276 0 2024-02-09 19:46:52 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.71 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.518 [INFO][3619] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.540 [INFO][3633] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" HandleID="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Workload="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.549 [INFO][3633] ipam_plugin.go 268: Auto assigning IP ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" HandleID="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Workload="10.0.0.71-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050520), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"test-pod-1", "timestamp":"2024-02-09 19:47:10.540765934 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.550 [INFO][3633] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.550 [INFO][3633] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.550 [INFO][3633] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.551 [INFO][3633] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.554 [INFO][3633] ipam.go 372: Looking up existing affinities for host host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.558 [INFO][3633] ipam.go 489: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.559 [INFO][3633] ipam.go 155: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.561 [INFO][3633] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.561 [INFO][3633] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.562 [INFO][3633] ipam.go 1682: Creating new handle: k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.564 [INFO][3633] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.568 [INFO][3633] ipam.go 1216: Successfully claimed IPs: [192.168.77.69/26] block=192.168.77.64/26 handle="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.568 [INFO][3633] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.69/26] handle="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" host="10.0.0.71" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.568 [INFO][3633] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.568 [INFO][3633] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.77.69/26] IPv6=[] ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" HandleID="k8s-pod-network.104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Workload="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.570 [INFO][3619] k8s.go 385: Populated endpoint ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"166aa272-29ae-44d6-80d6-6805403d5b77", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:10.579765 env[1193]: 2024-02-09 19:47:10.570 [INFO][3619] k8s.go 386: Calico CNI using IPs: [192.168.77.69/32] ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.580393 env[1193]: 2024-02-09 19:47:10.570 [INFO][3619] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.580393 env[1193]: 2024-02-09 19:47:10.574 [INFO][3619] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.580393 env[1193]: 2024-02-09 19:47:10.574 [INFO][3619] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"166aa272-29ae-44d6-80d6-6805403d5b77", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1a:a5:12:ff:04:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:10.580393 env[1193]: 2024-02-09 19:47:10.578 [INFO][3619] k8s.go 491: Wrote updated endpoint to datastore ContainerID="104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" Feb 9 19:47:10.589000 audit[3659]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:47:10.589000 audit[3659]: SYSCALL arch=c000003e syscall=46 success=yes exit=20268 a0=3 a1=7ffe1992cf60 a2=0 a3=7ffe1992cf4c items=0 ppid=2423 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:10.589000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:47:10.592614 env[1193]: time="2024-02-09T19:47:10.592557241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:47:10.592614 env[1193]: time="2024-02-09T19:47:10.592592147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:47:10.592614 env[1193]: time="2024-02-09T19:47:10.592601394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:47:10.593188 env[1193]: time="2024-02-09T19:47:10.593141366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f pid=3662 runtime=io.containerd.runc.v2 Feb 9 19:47:10.613156 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:47:10.634906 env[1193]: time="2024-02-09T19:47:10.634859226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:166aa272-29ae-44d6-80d6-6805403d5b77,Namespace:default,Attempt:0,} returns sandbox id \"104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f\"" Feb 9 19:47:10.636549 env[1193]: time="2024-02-09T19:47:10.636515883Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:47:11.027732 env[1193]: time="2024-02-09T19:47:11.027594646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:11.029546 env[1193]: time="2024-02-09T19:47:11.029526750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:11.030980 env[1193]: time="2024-02-09T19:47:11.030961671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:11.032613 env[1193]: time="2024-02-09T19:47:11.032567935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:11.033129 env[1193]: time="2024-02-09T19:47:11.033088692Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:47:11.034662 env[1193]: time="2024-02-09T19:47:11.034633640Z" level=info msg="CreateContainer within sandbox \"104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:47:11.045510 env[1193]: time="2024-02-09T19:47:11.045473254Z" level=info msg="CreateContainer within sandbox \"104d7f89aba135f7f3d70f79029a7046259e99457f0a9a0f30d1326e450c4b2f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5bcf4896fe8ad819efdb8cdf44b195543b8f4dc9f5e5fe66bdf98751b0b98289\"" Feb 9 19:47:11.045903 env[1193]: time="2024-02-09T19:47:11.045879004Z" level=info msg="StartContainer for \"5bcf4896fe8ad819efdb8cdf44b195543b8f4dc9f5e5fe66bdf98751b0b98289\"" Feb 9 19:47:11.090883 env[1193]: time="2024-02-09T19:47:11.090836158Z" level=info msg="StartContainer for \"5bcf4896fe8ad819efdb8cdf44b195543b8f4dc9f5e5fe66bdf98751b0b98289\" returns successfully" Feb 9 19:47:11.194868 kubelet[1543]: E0209 19:47:11.194825 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:11.447041 kubelet[1543]: I0209 19:47:11.447002 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372017407804e+09 pod.CreationTimestamp="2024-02-09 
19:46:52 +0000 UTC" firstStartedPulling="2024-02-09 19:47:10.636246358 +0000 UTC m=+87.646192424" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:47:11.446936294 +0000 UTC m=+88.456882360" watchObservedRunningTime="2024-02-09 19:47:11.446971971 +0000 UTC m=+88.456918037" Feb 9 19:47:11.741872 systemd-networkd[1073]: cali5ec59c6bf6e: Gained IPv6LL Feb 9 19:47:12.195018 kubelet[1543]: E0209 19:47:12.194952 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:13.195385 kubelet[1543]: E0209 19:47:13.195337 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:47:14.196171 kubelet[1543]: E0209 19:47:14.196128 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"