Feb 8 23:38:04.021908 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:38:04.021931 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:38:04.021943 kernel: BIOS-provided physical RAM map:
Feb 8 23:38:04.021949 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:38:04.021954 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:38:04.021962 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:38:04.021972 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:38:04.021978 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:38:04.021986 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:38:04.021993 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:38:04.022001 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:38:04.022008 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:38:04.022017 kernel: NX (Execute Disable) protection: active
Feb 8 23:38:04.022023 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:38:04.022035 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 8 23:38:04.022043 kernel: random: crng init done
Feb 8 23:38:04.022052 kernel: SMBIOS 3.1.0 present.
Feb 8 23:38:04.022060 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:38:04.022066 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:38:04.022075 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:38:04.022081 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:38:04.022087 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:38:04.022098 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:38:04.022104 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:38:04.022111 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:38:04.022120 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:38:04.022127 kernel: tsc: Detected 2593.906 MHz processor
Feb 8 23:38:04.022134 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:38:04.022143 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:38:04.022149 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:38:04.022156 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:38:04.022165 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:38:04.022173 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:38:04.022181 kernel: Using GB pages for direct mapping
Feb 8 23:38:04.022189 kernel: Secure boot disabled
Feb 8 23:38:04.022195 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:38:04.022204 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:38:04.022211 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022217 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022227 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:38:04.022238 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:38:04.022248 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022255 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022262 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022272 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022279 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022290 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022297 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:38:04.022304 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:38:04.022314 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:38:04.022321 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:38:04.022328 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:38:04.022337 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:38:04.022344 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:38:04.022357 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:38:04.022364 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:38:04.022371 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:38:04.022380 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:38:04.022388 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:38:04.022397 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:38:04.022406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:38:04.022414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:38:04.022424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:38:04.022434 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:38:04.022443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:38:04.022451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:38:04.022458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:38:04.022468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:38:04.022475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:38:04.022482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:38:04.022492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:38:04.022499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:38:04.022511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:38:04.022518 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:38:04.022528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:38:04.022537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:38:04.022548 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:38:04.022556 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff]
Feb 8 23:38:04.022565 kernel: Zone ranges:
Feb 8 23:38:04.022574 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:38:04.022583 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:38:04.022595 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:38:04.022601 kernel: Movable zone start for each node
Feb 8 23:38:04.022610 kernel: Early memory node ranges
Feb 8 23:38:04.022618 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:38:04.023648 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:38:04.023662 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:38:04.023669 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:38:04.023678 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:38:04.023686 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:38:04.023699 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:38:04.023705 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:38:04.023714 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:38:04.023722 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:38:04.023732 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:38:04.023739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:38:04.023747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:38:04.023756 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:38:04.023764 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:38:04.023775 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:38:04.023782 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:38:04.023792 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:38:04.023800 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:38:04.023809 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:38:04.023816 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:38:04.023826 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:38:04.023834 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:38:04.023843 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:38:04.023852 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:38:04.023862 kernel: Policy zone: Normal
Feb 8 23:38:04.023872 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:38:04.023881 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:38:04.023888 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:38:04.023898 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:38:04.023906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:38:04.023916 kernel: Memory: 8081196K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306004K reserved, 0K cma-reserved)
Feb 8 23:38:04.023925 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:38:04.023935 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:38:04.023952 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:38:04.023961 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:38:04.023971 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:38:04.023980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:38:04.023990 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:38:04.023997 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:38:04.024005 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:38:04.024015 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:38:04.024024 kernel: Using NULL legacy PIC
Feb 8 23:38:04.024034 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:38:04.024042 kernel: Console: colour dummy device 80x25
Feb 8 23:38:04.024052 kernel: printk: console [tty1] enabled
Feb 8 23:38:04.024062 kernel: printk: console [ttyS0] enabled
Feb 8 23:38:04.024070 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:38:04.024080 kernel: ACPI: Core revision 20210730
Feb 8 23:38:04.024089 kernel: Failed to register legacy timer interrupt
Feb 8 23:38:04.024099 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:38:04.024106 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:38:04.024115 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 8 23:38:04.024124 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:38:04.024135 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:38:04.024142 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:38:04.024150 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:38:04.024159 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:38:04.024171 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:38:04.024179 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:38:04.024187 kernel: RETBleed: Vulnerable
Feb 8 23:38:04.024196 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:38:04.024206 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:38:04.024214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:38:04.024225 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:38:04.024234 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:38:04.024244 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:38:04.024252 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:38:04.024263 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:38:04.024271 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:38:04.024281 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:38:04.024289 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:38:04.024297 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:38:04.024306 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:38:04.024316 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:38:04.024323 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:38:04.024331 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:38:04.024341 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:38:04.024349 kernel: LSM: Security Framework initializing
Feb 8 23:38:04.024358 kernel: SELinux: Initializing.
Feb 8 23:38:04.024367 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:38:04.024377 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:38:04.024386 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:38:04.024395 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:38:04.024402 kernel: signal: max sigframe size: 3632
Feb 8 23:38:04.024413 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:38:04.024421 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:38:04.024430 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:38:04.024437 kernel: x86: Booting SMP configuration:
Feb 8 23:38:04.024447 kernel: .... node #0, CPUs: #1
Feb 8 23:38:04.024458 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:38:04.024469 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:38:04.024476 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:38:04.024483 kernel: smpboot: Max logical packages: 1
Feb 8 23:38:04.024493 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:38:04.024501 kernel: devtmpfs: initialized
Feb 8 23:38:04.024511 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:38:04.024519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:38:04.024530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:38:04.024538 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:38:04.024549 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:38:04.024556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:38:04.024565 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:38:04.024574 kernel: audit: type=2000 audit(1707435482.023:1): state=initialized audit_enabled=0 res=1
Feb 8 23:38:04.024583 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:38:04.024591 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:38:04.024599 kernel: cpuidle: using governor menu
Feb 8 23:38:04.024611 kernel: ACPI: bus type PCI registered
Feb 8 23:38:04.038641 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:38:04.038666 kernel: dca service started, version 1.12.1
Feb 8 23:38:04.038679 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:38:04.038691 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:38:04.038700 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:38:04.038711 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:38:04.038721 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:38:04.038730 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:38:04.038744 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:38:04.038752 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:38:04.038760 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:38:04.038770 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:38:04.038777 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:38:04.038788 kernel: ACPI: Interpreter enabled
Feb 8 23:38:04.038795 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:38:04.038803 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:38:04.038813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:38:04.038823 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:38:04.038833 kernel: iommu: Default domain type: Translated
Feb 8 23:38:04.038841 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:38:04.038850 kernel: vgaarb: loaded
Feb 8 23:38:04.038858 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:38:04.038866 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:38:04.038876 kernel: PTP clock support registered
Feb 8 23:38:04.038884 kernel: Registered efivars operations
Feb 8 23:38:04.038893 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:38:04.038902 kernel: PCI: System does not support PCI
Feb 8 23:38:04.038912 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:38:04.038921 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:38:04.038929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:38:04.038940 kernel: pnp: PnP ACPI init
Feb 8 23:38:04.038947 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:38:04.038956 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:38:04.038965 kernel: NET: Registered PF_INET protocol family
Feb 8 23:38:04.038972 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:38:04.038984 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:38:04.038992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:38:04.039002 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:38:04.039010 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:38:04.039017 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:38:04.039024 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:38:04.039032 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:38:04.039039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:38:04.039051 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:38:04.039060 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:38:04.039069 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:38:04.039078 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:38:04.039086 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:38:04.039096 kernel: Initialise system trusted keyrings
Feb 8 23:38:04.039104 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:38:04.039115 kernel: Key type asymmetric registered
Feb 8 23:38:04.039123 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:38:04.039132 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:38:04.039144 kernel: io scheduler mq-deadline registered
Feb 8 23:38:04.039154 kernel: io scheduler kyber registered
Feb 8 23:38:04.039165 kernel: io scheduler bfq registered
Feb 8 23:38:04.039172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:38:04.039181 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:38:04.039190 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:38:04.039198 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:38:04.039205 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:38:04.039346 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:38:04.039433 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:38:03 UTC (1707435483)
Feb 8 23:38:04.039513 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:38:04.039525 kernel: fail to initialize ptp_kvm
Feb 8 23:38:04.039534 kernel: intel_pstate: CPU model not supported
Feb 8 23:38:04.039544 kernel: efifb: probing for efifb
Feb 8 23:38:04.039552 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:38:04.039562 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:38:04.039572 kernel: efifb: scrolling: redraw
Feb 8 23:38:04.039584 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:38:04.039595 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:38:04.039602 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:38:04.039610 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:38:04.039620 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:38:04.039641 kernel: Segment Routing with IPv6
Feb 8 23:38:04.039648 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:38:04.039657 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:38:04.039666 kernel: Key type dns_resolver registered
Feb 8 23:38:04.039676 kernel: IPI shorthand broadcast: enabled
Feb 8 23:38:04.039686 kernel: sched_clock: Marking stable (815808200, 25144800)->(1080090500, -239137500)
Feb 8 23:38:04.039693 kernel: registered taskstats version 1
Feb 8 23:38:04.039703 kernel: Loading compiled-in X.509 certificates
Feb 8 23:38:04.039711 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:38:04.039719 kernel: Key type .fscrypt registered
Feb 8 23:38:04.039729 kernel: Key type fscrypt-provisioning registered
Feb 8 23:38:04.039737 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:38:04.039746 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:38:04.039756 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:38:04.039764 kernel: ima: No architecture policies found
Feb 8 23:38:04.039771 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:38:04.039782 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:38:04.039789 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:38:04.039799 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:38:04.039807 kernel: Run /init as init process
Feb 8 23:38:04.039814 kernel: with arguments:
Feb 8 23:38:04.039822 kernel: /init
Feb 8 23:38:04.039832 kernel: with environment:
Feb 8 23:38:04.039839 kernel: HOME=/
Feb 8 23:38:04.039846 kernel: TERM=linux
Feb 8 23:38:04.039853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:38:04.039862 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:38:04.039872 systemd[1]: Detected virtualization microsoft.
Feb 8 23:38:04.039880 systemd[1]: Detected architecture x86-64.
Feb 8 23:38:04.039889 systemd[1]: Running in initrd.
Feb 8 23:38:04.039899 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:38:04.039908 systemd[1]: Hostname set to .
Feb 8 23:38:04.039916 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:38:04.039927 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:38:04.039934 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:38:04.039945 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:38:04.039952 systemd[1]: Reached target paths.target.
Feb 8 23:38:04.039963 systemd[1]: Reached target slices.target.
Feb 8 23:38:04.039973 systemd[1]: Reached target swap.target.
Feb 8 23:38:04.039983 systemd[1]: Reached target timers.target.
Feb 8 23:38:04.039992 systemd[1]: Listening on iscsid.socket.
Feb 8 23:38:04.040000 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:38:04.040008 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:38:04.040019 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:38:04.040027 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:38:04.040039 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:38:04.040050 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:38:04.040060 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:38:04.040069 systemd[1]: Reached target sockets.target.
Feb 8 23:38:04.040078 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:38:04.040098 systemd[1]: Finished network-cleanup.service.
Feb 8 23:38:04.040106 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:38:04.040114 systemd[1]: Starting systemd-journald.service...
Feb 8 23:38:04.040126 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:38:04.040136 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:38:04.040147 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:38:04.040155 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:38:04.040163 kernel: audit: type=1130 audit(1707435484.022:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.040174 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:38:04.040186 systemd-journald[183]: Journal started
Feb 8 23:38:04.040234 systemd-journald[183]: Runtime Journal (/run/log/journal/54092c25382a45b5a1e58f266e45ae9d) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:38:04.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.005996 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:38:04.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.058353 systemd[1]: Started systemd-journald.service.
Feb 8 23:38:04.058394 kernel: audit: type=1130 audit(1707435484.043:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.075317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:38:04.070097 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:38:04.073448 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:38:04.092343 kernel: Bridge firewalling registered
Feb 8 23:38:04.092368 kernel: audit: type=1130 audit(1707435484.069:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.078192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:38:04.101835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:38:04.106316 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:38:04.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.122344 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:38:04.132330 kernel: audit: type=1130 audit(1707435484.072:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.122354 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:38:04.122389 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:38:04.125092 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:38:04.125934 systemd[1]: Started systemd-resolved.service.
Feb 8 23:38:04.131145 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:38:04.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.157283 kernel: audit: type=1130 audit(1707435484.101:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.157318 kernel: audit: type=1130 audit(1707435484.129:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.160253 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:38:04.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.164310 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:38:04.180749 kernel: audit: type=1130 audit(1707435484.162:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:04.188643 kernel: SCSI subsystem initialized
Feb 8 23:38:04.190294 dracut-cmdline[200]: dracut-dracut-053
Feb 8 23:38:04.194502 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:38:04.227940 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:38:04.227984 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:38:04.233738 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:38:04.237936 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:38:04.239725 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:38:04.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.255566 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:38:04.263426 kernel: audit: type=1130 audit(1707435484.247:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.268638 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:38:04.270719 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:38:04.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.285641 kernel: audit: type=1130 audit(1707435484.272:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.294643 kernel: iscsi: registered transport (tcp) Feb 8 23:38:04.318966 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:38:04.319024 kernel: QLogic iSCSI HBA Driver Feb 8 23:38:04.347963 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:38:04.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.353205 systemd[1]: Starting dracut-pre-udev.service... 
Feb 8 23:38:04.402646 kernel: raid6: avx512x4 gen() 18561 MB/s Feb 8 23:38:04.423641 kernel: raid6: avx512x4 xor() 8003 MB/s Feb 8 23:38:04.443634 kernel: raid6: avx512x2 gen() 18581 MB/s Feb 8 23:38:04.463635 kernel: raid6: avx512x2 xor() 29844 MB/s Feb 8 23:38:04.483635 kernel: raid6: avx512x1 gen() 18541 MB/s Feb 8 23:38:04.503635 kernel: raid6: avx512x1 xor() 26983 MB/s Feb 8 23:38:04.522646 kernel: raid6: avx2x4 gen() 18536 MB/s Feb 8 23:38:04.542635 kernel: raid6: avx2x4 xor() 8091 MB/s Feb 8 23:38:04.561632 kernel: raid6: avx2x2 gen() 18530 MB/s Feb 8 23:38:04.581633 kernel: raid6: avx2x2 xor() 22281 MB/s Feb 8 23:38:04.601636 kernel: raid6: avx2x1 gen() 14118 MB/s Feb 8 23:38:04.620639 kernel: raid6: avx2x1 xor() 19459 MB/s Feb 8 23:38:04.640637 kernel: raid6: sse2x4 gen() 11734 MB/s Feb 8 23:38:04.661636 kernel: raid6: sse2x4 xor() 7410 MB/s Feb 8 23:38:04.681632 kernel: raid6: sse2x2 gen() 12952 MB/s Feb 8 23:38:04.701632 kernel: raid6: sse2x2 xor() 7494 MB/s Feb 8 23:38:04.721636 kernel: raid6: sse2x1 gen() 11663 MB/s Feb 8 23:38:04.743814 kernel: raid6: sse2x1 xor() 5863 MB/s Feb 8 23:38:04.743834 kernel: raid6: using algorithm avx512x2 gen() 18581 MB/s Feb 8 23:38:04.743844 kernel: raid6: .... xor() 29844 MB/s, rmw enabled Feb 8 23:38:04.747117 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:38:04.765650 kernel: xor: automatically using best checksumming function avx Feb 8 23:38:04.860648 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:38:04.868064 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:38:04.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.872000 audit: BPF prog-id=7 op=LOAD Feb 8 23:38:04.872000 audit: BPF prog-id=8 op=LOAD Feb 8 23:38:04.873154 systemd[1]: Starting systemd-udevd.service... 
Feb 8 23:38:04.887113 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 8 23:38:04.891761 systemd[1]: Started systemd-udevd.service. Feb 8 23:38:04.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.899772 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:38:04.915330 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 8 23:38:04.945097 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:38:04.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.948178 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:38:04.982723 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:38:04.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:05.030644 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:38:05.055647 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:38:05.078642 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:38:05.088648 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:38:05.088695 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:38:05.093960 kernel: scsi host0: storvsc_host_t Feb 8 23:38:05.098632 kernel: scsi host1: storvsc_host_t Feb 8 23:38:05.098824 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:38:05.107649 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:38:05.116839 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:38:05.116916 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:38:05.116935 kernel: AES CTR mode by8 optimization enabled Feb 8 23:38:05.120212 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:38:05.136647 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:38:05.158490 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:38:05.158773 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:38:05.167660 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:38:05.167859 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:38:05.167878 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:38:05.167998 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:38:05.178926 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:38:05.183642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:38:05.188648 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:38:05.195636 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:38:05.195838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:38:05.198692 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:38:05.307975 kernel: hv_netvsc 002248a0-0d66-0022-48a0-0d66002248a0 eth0: VF slot 1 added Feb 8 23:38:05.316640 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:38:05.328470 kernel: hv_pci 69a98b4f-4770-4fd4-a441-180bd7f5eea0: PCI VMBus probing: Using version 0x10004 Feb 8 23:38:05.328686 kernel: hv_pci 69a98b4f-4770-4fd4-a441-180bd7f5eea0: PCI host bridge to bus 4770:00 Feb 8 23:38:05.337616 kernel: pci_bus 4770:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 
23:38:05.337802 kernel: pci_bus 4770:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:38:05.347938 kernel: pci 4770:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:38:05.355662 kernel: pci 4770:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:38:05.371785 kernel: pci 4770:00:02.0: enabling Extended Tags Feb 8 23:38:05.388414 kernel: pci 4770:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4770:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:38:05.388666 kernel: pci_bus 4770:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:38:05.388795 kernel: pci 4770:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:38:05.485645 kernel: mlx5_core 4770:00:02.0: firmware version: 14.30.1224 Feb 8 23:38:05.643649 kernel: mlx5_core 4770:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:38:05.677048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:38:05.732651 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (454) Feb 8 23:38:05.746041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:38:05.793549 kernel: mlx5_core 4770:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:38:05.793814 kernel: mlx5_core 4770:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Feb 8 23:38:05.801642 kernel: hv_netvsc 002248a0-0d66-0022-48a0-0d66002248a0 eth0: VF registering: eth1 Feb 8 23:38:05.801837 kernel: mlx5_core 4770:00:02.0 eth1: joined to eth0 Feb 8 23:38:05.816643 kernel: mlx5_core 4770:00:02.0 enP18288s1: renamed from eth1 Feb 8 23:38:05.904164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:38:05.909597 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:38:05.916558 systemd[1]: Starting disk-uuid.service... 
Feb 8 23:38:05.944575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:38:06.938435 disk-uuid[561]: The operation has completed successfully. Feb 8 23:38:06.940875 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:38:07.001934 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:38:07.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.002045 systemd[1]: Finished disk-uuid.service. Feb 8 23:38:07.017452 systemd[1]: Starting verity-setup.service... Feb 8 23:38:07.082831 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:38:07.271152 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:38:07.277621 systemd[1]: Finished verity-setup.service. Feb 8 23:38:07.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.282017 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:38:07.355660 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:38:07.356066 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:38:07.358128 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:38:07.358877 systemd[1]: Starting ignition-setup.service... Feb 8 23:38:07.366468 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 8 23:38:07.389984 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:38:07.390045 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:38:07.390065 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:38:07.436471 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:38:07.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.440000 audit: BPF prog-id=9 op=LOAD Feb 8 23:38:07.442169 systemd[1]: Starting systemd-networkd.service... Feb 8 23:38:07.466875 systemd-networkd[831]: lo: Link UP Feb 8 23:38:07.466885 systemd-networkd[831]: lo: Gained carrier Feb 8 23:38:07.470952 systemd-networkd[831]: Enumeration completed Feb 8 23:38:07.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.471048 systemd[1]: Started systemd-networkd.service. Feb 8 23:38:07.475136 systemd[1]: Reached target network.target. Feb 8 23:38:07.477523 systemd-networkd[831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:38:07.484751 systemd[1]: Starting iscsiuio.service... Feb 8 23:38:07.492530 systemd[1]: Started iscsiuio.service. Feb 8 23:38:07.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.496969 systemd[1]: Starting iscsid.service... 
Feb 8 23:38:07.504219 iscsid[843]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:38:07.504219 iscsid[843]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 8 23:38:07.504219 iscsid[843]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:38:07.504219 iscsid[843]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:38:07.504219 iscsid[843]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:38:07.504219 iscsid[843]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:38:07.504219 iscsid[843]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:38:07.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.506228 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:38:07.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.506571 systemd[1]: Started iscsid.service. Feb 8 23:38:07.526685 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:38:07.538528 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:38:07.542082 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:38:07.544067 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:38:07.545025 systemd[1]: Reached target remote-fs.target. 
Feb 8 23:38:07.548170 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:38:07.559637 kernel: mlx5_core 4770:00:02.0 enP18288s1: Link up Feb 8 23:38:07.572025 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:38:07.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.630653 kernel: hv_netvsc 002248a0-0d66-0022-48a0-0d66002248a0 eth0: Data path switched to VF: enP18288s1 Feb 8 23:38:07.636640 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:38:07.636832 systemd-networkd[831]: enP18288s1: Link UP Feb 8 23:38:07.639691 systemd-networkd[831]: eth0: Link UP Feb 8 23:38:07.642578 systemd-networkd[831]: eth0: Gained carrier Feb 8 23:38:07.645837 systemd-networkd[831]: enP18288s1: Gained carrier Feb 8 23:38:07.668727 systemd-networkd[831]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:38:07.704588 systemd[1]: Finished ignition-setup.service. Feb 8 23:38:07.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:07.709370 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 8 23:38:09.692841 systemd-networkd[831]: eth0: Gained IPv6LL Feb 8 23:38:10.758526 ignition[858]: Ignition 2.14.0 Feb 8 23:38:10.758542 ignition[858]: Stage: fetch-offline Feb 8 23:38:10.758656 ignition[858]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:10.758708 ignition[858]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:10.912111 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:10.912309 ignition[858]: parsed url from cmdline: "" Feb 8 23:38:10.912313 ignition[858]: no config URL provided Feb 8 23:38:10.912319 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:38:10.942355 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:38:10.942387 kernel: audit: type=1130 audit(1707435490.923:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:10.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:10.919079 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:38:10.912327 ignition[858]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:38:10.924235 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:38:10.912333 ignition[858]: failed to fetch config: resource requires networking Feb 8 23:38:10.913550 ignition[858]: Ignition finished successfully Feb 8 23:38:10.932514 ignition[864]: Ignition 2.14.0 Feb 8 23:38:10.932520 ignition[864]: Stage: fetch Feb 8 23:38:10.932621 ignition[864]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:10.932657 ignition[864]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:10.938384 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:10.938814 ignition[864]: parsed url from cmdline: "" Feb 8 23:38:10.938821 ignition[864]: no config URL provided Feb 8 23:38:10.938830 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:38:10.938848 ignition[864]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:38:10.938914 ignition[864]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:38:11.044571 ignition[864]: GET result: OK Feb 8 23:38:11.044596 ignition[864]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 8 23:38:11.168058 ignition[864]: opening config device: "/dev/sr0" Feb 8 23:38:11.168536 ignition[864]: getting drive status for "/dev/sr0" Feb 8 23:38:11.168644 ignition[864]: drive status: OK Feb 8 23:38:11.168687 ignition[864]: mounting config device Feb 8 23:38:11.168700 ignition[864]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure850398891" Feb 8 23:38:11.194183 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000) Feb 8 23:38:11.193404 ignition[864]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure850398891" Feb 8 23:38:11.193415 ignition[864]: checking for config drive Feb 8 23:38:11.195816 systemd[1]: tmp-ignition\x2dazure850398891.mount: 
Deactivated successfully. Feb 8 23:38:11.193758 ignition[864]: reading config Feb 8 23:38:11.194146 ignition[864]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure850398891" Feb 8 23:38:11.194244 ignition[864]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure850398891" Feb 8 23:38:11.194266 ignition[864]: config has been read from custom data Feb 8 23:38:11.194315 ignition[864]: parsing config with SHA512: 6067eebbe457ee5ff5f3d3c7b3bc71909f93bdf8e2484d0ab20cde1c8ea0d0725c5ccf4412018eed782b5a0b3cfe9f41d5407e1e649d5fcb9d6851ad3041fa26 Feb 8 23:38:11.243370 unknown[864]: fetched base config from "system" Feb 8 23:38:11.243392 unknown[864]: fetched base config from "system" Feb 8 23:38:11.243404 unknown[864]: fetched user config from "azure" Feb 8 23:38:11.247255 ignition[864]: fetch: fetch complete Feb 8 23:38:11.247266 ignition[864]: fetch: fetch passed Feb 8 23:38:11.247308 ignition[864]: Ignition finished successfully Feb 8 23:38:11.255173 systemd[1]: Finished ignition-fetch.service. Feb 8 23:38:11.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.258018 systemd[1]: Starting ignition-kargs.service... Feb 8 23:38:11.272741 kernel: audit: type=1130 audit(1707435491.257:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:11.280734 ignition[873]: Ignition 2.14.0 Feb 8 23:38:11.280744 ignition[873]: Stage: kargs Feb 8 23:38:11.280878 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:11.280909 ignition[873]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:11.291187 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:11.294882 ignition[873]: kargs: kargs passed Feb 8 23:38:11.294934 ignition[873]: Ignition finished successfully Feb 8 23:38:11.298849 systemd[1]: Finished ignition-kargs.service. Feb 8 23:38:11.301619 systemd[1]: Starting ignition-disks.service... Feb 8 23:38:11.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.317092 ignition[879]: Ignition 2.14.0 Feb 8 23:38:11.322638 kernel: audit: type=1130 audit(1707435491.300:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.317101 ignition[879]: Stage: disks Feb 8 23:38:11.317212 ignition[879]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:11.317241 ignition[879]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:11.322440 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:11.328204 ignition[879]: disks: disks passed Feb 8 23:38:11.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:11.329477 systemd[1]: Finished ignition-disks.service. Feb 8 23:38:11.349115 kernel: audit: type=1130 audit(1707435491.331:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.328265 ignition[879]: Ignition finished successfully Feb 8 23:38:11.331772 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:38:11.345182 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:38:11.349077 systemd[1]: Reached target local-fs.target. Feb 8 23:38:11.350941 systemd[1]: Reached target sysinit.target. Feb 8 23:38:11.352838 systemd[1]: Reached target basic.target. Feb 8 23:38:11.358336 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:38:11.425695 systemd-fsck[887]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:38:11.431880 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:38:11.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.437851 systemd[1]: Mounting sysroot.mount... Feb 8 23:38:11.451930 kernel: audit: type=1130 audit(1707435491.435:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:11.461644 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:38:11.463797 systemd[1]: Mounted sysroot.mount. Feb 8 23:38:11.465758 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:38:11.502455 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:38:11.505803 systemd[1]: Starting flatcar-metadata-hostname.service... 
Feb 8 23:38:11.510988 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:38:11.511033 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:38:11.519929 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:38:11.569822 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:38:11.576000 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:38:11.593646 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (897) Feb 8 23:38:11.602390 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:38:11.602428 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:38:11.602440 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:38:11.606371 initrd-setup-root[902]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:38:11.613775 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:38:11.629091 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:38:11.633813 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:38:11.652572 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:38:12.072160 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:38:12.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.077542 systemd[1]: Starting ignition-mount.service... Feb 8 23:38:12.092899 kernel: audit: type=1130 audit(1707435492.075:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.091101 systemd[1]: Starting sysroot-boot.service... Feb 8 23:38:12.115657 systemd[1]: Finished sysroot-boot.service. 
Feb 8 23:38:12.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.130038 kernel: audit: type=1130 audit(1707435492.116:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.132289 ignition[964]: INFO : Ignition 2.14.0 Feb 8 23:38:12.134573 ignition[964]: INFO : Stage: mount Feb 8 23:38:12.136556 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:12.139708 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:12.145400 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:12.145400 ignition[964]: INFO : mount: mount passed Feb 8 23:38:12.145400 ignition[964]: INFO : Ignition finished successfully Feb 8 23:38:12.144227 systemd[1]: Finished ignition-mount.service. Feb 8 23:38:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.166643 kernel: audit: type=1130 audit(1707435492.154:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:12.194632 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:38:12.194732 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 8 23:38:12.905066 coreos-metadata[896]: Feb 08 23:38:12.904 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:38:12.921317 coreos-metadata[896]: Feb 08 23:38:12.921 INFO Fetch successful Feb 8 23:38:12.953881 coreos-metadata[896]: Feb 08 23:38:12.953 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:38:12.968741 coreos-metadata[896]: Feb 08 23:38:12.968 INFO Fetch successful Feb 8 23:38:12.986647 coreos-metadata[896]: Feb 08 23:38:12.986 INFO wrote hostname ci-3510.3.2-a-df5d74ad8f to /sysroot/etc/hostname Feb 8 23:38:12.992447 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:38:12.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:13.009853 systemd[1]: Starting ignition-files.service... Feb 8 23:38:13.014370 kernel: audit: type=1130 audit(1707435492.994:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:13.021208 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:38:13.034642 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (975) Feb 8 23:38:13.044263 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:38:13.044303 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:38:13.044315 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:38:13.053148 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:38:13.067641 ignition[994]: INFO : Ignition 2.14.0
Feb 8 23:38:13.067641 ignition[994]: INFO : Stage: files
Feb 8 23:38:13.071872 ignition[994]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:38:13.071872 ignition[994]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:38:13.086143 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:38:13.094550 ignition[994]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:38:13.098646 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:38:13.102598 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:38:13.166054 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:38:13.169814 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:38:13.180451 unknown[994]: wrote ssh authorized keys file for user: core
Feb 8 23:38:13.183076 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:38:13.202914 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 8 23:38:13.207436 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 8 23:38:13.207436 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 8 23:38:13.216844 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 8 23:38:13.823315 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:38:14.005514 ignition[994]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 8 23:38:14.018773 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 8 23:38:14.018773 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 8 23:38:14.018773 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:38:14.496929 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:38:14.595294 ignition[994]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 8 23:38:14.608258 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 8 23:38:14.608258 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:38:14.623693 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:38:19.848512 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:38:20.062904 ignition[994]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 8 23:38:20.070739 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:38:20.070739 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:38:20.070739 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:38:20.199583 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 8 23:38:20.719784 ignition[994]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1109378606"
Feb 8 23:38:20.733500 ignition[994]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1109378606": device or resource busy
Feb 8 23:38:20.733500 ignition[994]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1109378606", trying btrfs: device or resource busy
Feb 8 23:38:20.733500 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1109378606"
Feb 8 23:38:20.815159 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (996)
Feb 8 23:38:20.815182 kernel: audit: type=1130 audit(1707435500.782:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1109378606"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1109378606"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1109378606"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem694252310"
Feb 8 23:38:20.815232 ignition[994]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem694252310": device or resource busy
Feb 8 23:38:20.815232 ignition[994]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem694252310", trying btrfs: device or resource busy
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem694252310"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem694252310"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem694252310"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem694252310"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: op(13): [started] processing unit "waagent.service"
Feb 8 23:38:20.815232 ignition[994]: INFO : files: op(13): [finished] processing unit "waagent.service"
Feb 8 23:38:20.753463 systemd[1]: mnt-oem1109378606.mount: Deactivated successfully.
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(14): [started] processing unit "nvidia.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(15): [started] processing unit "containerd.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(15): [finished] processing unit "containerd.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(19): [started] processing unit "prepare-critools.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(19): [finished] processing unit "prepare-critools.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:38:20.854788 ignition[994]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:38:20.977937 kernel: audit: type=1130 audit(1707435500.860:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.977974 kernel: audit: type=1131 audit(1707435500.860:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.771977 systemd[1]: mnt-oem694252310.mount: Deactivated successfully.
Feb 8 23:38:20.993699 kernel: audit: type=1130 audit(1707435500.977:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.993842 ignition[994]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:38:20.993842 ignition[994]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:38:20.993842 ignition[994]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:38:20.993842 ignition[994]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:38:20.993842 ignition[994]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:38:20.993842 ignition[994]: INFO : files: files passed
Feb 8 23:38:20.993842 ignition[994]: INFO : Ignition finished successfully
Feb 8 23:38:20.778637 systemd[1]: Finished ignition-files.service.
Feb 8 23:38:21.024240 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:38:21.050842 kernel: audit: type=1130 audit(1707435501.024:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.050875 kernel: audit: type=1131 audit(1707435501.024:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.798723 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:38:20.802630 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:38:20.808394 systemd[1]: Starting ignition-quench.service...
Feb 8 23:38:20.843478 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:38:20.843576 systemd[1]: Finished ignition-quench.service.
Feb 8 23:38:21.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.972479 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:38:21.085176 kernel: audit: type=1130 audit(1707435501.070:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:20.978038 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:38:21.001930 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:38:21.021979 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:38:21.022082 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:38:21.024478 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:38:21.050850 systemd[1]: Reached target initrd.target.
Feb 8 23:38:21.052652 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:38:21.123258 kernel: audit: type=1130 audit(1707435501.100:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.123290 kernel: audit: type=1131 audit(1707435501.110:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.053533 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:38:21.067903 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:38:21.082868 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:38:21.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.097281 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:38:21.153768 kernel: audit: type=1131 audit(1707435501.134:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.097379 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:38:21.112639 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:38:21.124587 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:38:21.128665 systemd[1]: Stopped target timers.target.
Feb 8 23:38:21.130565 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:38:21.130614 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:38:21.134216 systemd[1]: Stopped target initrd.target.
Feb 8 23:38:21.149423 systemd[1]: Stopped target basic.target.
Feb 8 23:38:21.153777 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:38:21.157629 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:38:21.161596 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:38:21.165818 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:38:21.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.169459 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:38:21.173847 systemd[1]: Stopped target sysinit.target.
Feb 8 23:38:21.179181 systemd[1]: Stopped target local-fs.target.
Feb 8 23:38:21.182857 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:38:21.186763 systemd[1]: Stopped target swap.target.
Feb 8 23:38:21.190782 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:38:21.193031 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:38:21.197041 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:38:21.200868 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:38:21.202619 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:38:21.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.219180 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:38:21.222118 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:38:21.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.226936 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:38:21.229216 systemd[1]: Stopped ignition-files.service.
Feb 8 23:38:21.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.233166 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:38:21.235663 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:38:21.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.240723 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:38:21.242765 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:38:21.258154 ignition[1033]: INFO : Ignition 2.14.0
Feb 8 23:38:21.258154 ignition[1033]: INFO : Stage: umount
Feb 8 23:38:21.258154 ignition[1033]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:38:21.258154 ignition[1033]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:38:21.244489 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:38:21.263988 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:38:21.244553 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:38:21.247596 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:38:21.251161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:38:21.251240 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:38:21.274636 ignition[1033]: INFO : umount: umount passed
Feb 8 23:38:21.274636 ignition[1033]: INFO : Ignition finished successfully
Feb 8 23:38:21.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.288182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:38:21.288251 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:38:21.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.294931 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:38:21.297022 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:38:21.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.300559 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:38:21.302895 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:38:21.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.305031 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:38:21.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.305081 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:38:21.308699 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:38:21.308749 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:38:21.313062 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:38:21.313110 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:38:21.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.324662 systemd[1]: Stopped target network.target.
Feb 8 23:38:21.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.326713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:38:21.326769 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:38:21.330596 systemd[1]: Stopped target paths.target.
Feb 8 23:38:21.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.332568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:38:21.338519 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:38:21.339654 systemd[1]: Stopped target slices.target.
Feb 8 23:38:21.340063 systemd[1]: Stopped target sockets.target.
Feb 8 23:38:21.340498 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:38:21.340527 systemd[1]: Closed iscsid.socket.
Feb 8 23:38:21.340978 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:38:21.341015 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:38:21.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.341392 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:38:21.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.341428 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:38:21.341973 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:38:21.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.342301 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:38:21.380000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:38:21.343840 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:38:21.362662 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:38:21.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.362750 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:38:21.364984 systemd-networkd[831]: eth0: DHCPv6 lease lost
Feb 8 23:38:21.391000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:38:21.370414 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:38:21.370508 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:38:21.377083 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:38:21.377168 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:38:21.380318 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:38:21.380353 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:38:21.383916 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:38:21.383966 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:38:21.404126 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:38:21.407540 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:38:21.407600 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:38:21.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.419533 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:38:21.419585 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:38:21.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.425541 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:38:21.425594 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:38:21.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.431897 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:38:21.435682 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:38:21.437849 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:38:21.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.442476 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:38:21.442546 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:38:21.447110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:38:21.447151 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:38:21.455172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:38:21.455229 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:38:21.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.459595 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:38:21.459649 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:38:21.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.463379 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:38:21.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:21.463425 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:38:21.466156 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:38:21.475612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:38:21.475674 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:38:21.478228 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:38:21.478321 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:38:21.520644 kernel: hv_netvsc 002248a0-0d66-0022-48a0-0d66002248a0 eth0: Data path switched from VF: enP18288s1
Feb 8 23:38:21.537319 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:38:21.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:21.537443 systemd[1]: Stopped network-cleanup.service. Feb 8 23:38:21.542187 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:38:21.547346 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:38:21.587003 systemd[1]: Switching root. Feb 8 23:38:21.589000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:38:21.589000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:38:21.589000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:38:21.591000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:38:21.591000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:38:21.616251 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:38:21.616322 iscsid[843]: iscsid shutting down. Feb 8 23:38:21.618064 systemd-journald[183]: Journal stopped Feb 8 23:38:36.096895 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:38:36.096920 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:38:36.096934 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:38:36.096943 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:38:36.096953 kernel: SELinux: policy capability open_perms=1 Feb 8 23:38:36.096964 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:38:36.096974 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:38:36.096987 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:38:36.096996 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:38:36.097006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:38:36.097014 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:38:36.097027 systemd[1]: Successfully loaded SELinux policy in 284.580ms. 
Feb 8 23:38:36.097041 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.398ms. Feb 8 23:38:36.097051 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:38:36.097067 systemd[1]: Detected virtualization microsoft. Feb 8 23:38:36.097078 systemd[1]: Detected architecture x86-64. Feb 8 23:38:36.097089 systemd[1]: Detected first boot. Feb 8 23:38:36.097099 systemd[1]: Hostname set to . Feb 8 23:38:36.097110 systemd[1]: Initializing machine ID from random generator. Feb 8 23:38:36.097124 kernel: kauditd_printk_skb: 39 callbacks suppressed Feb 8 23:38:36.097134 kernel: audit: type=1400 audit(1707435505.863:87): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:38:36.097145 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 8 23:38:36.097156 kernel: audit: type=1400 audit(1707435507.147:88): avc: denied { associate } for pid=1084 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:38:36.097168 kernel: audit: type=1300 audit(1707435507.147:88): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1067 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:36.097180 kernel: audit: type=1327 audit(1707435507.147:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:36.097191 kernel: audit: type=1400 audit(1707435507.155:89): avc: denied { associate } for pid=1084 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:38:36.097203 kernel: audit: type=1300 audit(1707435507.155:89): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1067 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:36.097212 kernel: audit: type=1307 audit(1707435507.155:89): cwd="/" Feb 8 23:38:36.097224 kernel: audit: type=1302 audit(1707435507.155:89): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:36.097234 kernel: audit: type=1302 audit(1707435507.155:89): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:36.097245 kernel: audit: type=1327 audit(1707435507.155:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:36.097257 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:38:36.097268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:38:36.097282 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:38:36.097292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:38:36.097305 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:38:36.097318 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:38:36.097330 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:38:36.097342 systemd[1]: Created slice system-getty.slice. Feb 8 23:38:36.097352 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:38:36.097367 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:38:36.097378 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:38:36.097389 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Feb 8 23:38:36.097401 systemd[1]: Created slice user.slice. Feb 8 23:38:36.097411 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:38:36.097423 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:38:36.097438 systemd[1]: Set up automount boot.automount. Feb 8 23:38:36.097447 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:38:36.097460 systemd[1]: Reached target integritysetup.target. Feb 8 23:38:36.097471 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:38:36.097482 systemd[1]: Reached target remote-fs.target. Feb 8 23:38:36.097493 systemd[1]: Reached target slices.target. Feb 8 23:38:36.097505 systemd[1]: Reached target swap.target. Feb 8 23:38:36.097517 systemd[1]: Reached target torcx.target. Feb 8 23:38:36.097527 systemd[1]: Reached target veritysetup.target. Feb 8 23:38:36.097542 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:38:36.097554 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:38:36.097564 kernel: audit: type=1400 audit(1707435515.808:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:38:36.097575 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:38:36.097586 kernel: audit: type=1335 audit(1707435515.808:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:38:36.097598 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:38:36.097608 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:38:36.097638 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:38:36.097649 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:38:36.097661 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:38:36.097674 systemd[1]: Listening on systemd-userdbd.socket. 
Feb 8 23:38:36.097687 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:38:36.097699 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:38:36.097711 systemd[1]: Mounting media.mount... Feb 8 23:38:36.097723 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:38:36.097734 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:38:36.097745 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:38:36.097758 systemd[1]: Mounting tmp.mount... Feb 8 23:38:36.097768 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:38:36.097781 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:38:36.097796 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:38:36.097816 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:38:36.097828 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:38:36.097840 systemd[1]: Starting modprobe@drm.service... Feb 8 23:38:36.097851 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:38:36.097863 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:38:36.097876 systemd[1]: Starting modprobe@loop.service... Feb 8 23:38:36.097886 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:38:36.097898 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 8 23:38:36.097913 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 8 23:38:36.097923 systemd[1]: Starting systemd-journald.service... Feb 8 23:38:36.097937 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:38:36.097949 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:38:36.097959 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:38:36.097971 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 8 23:38:36.097983 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:38:36.097995 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:38:36.098005 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:38:36.098019 systemd[1]: Mounted media.mount. Feb 8 23:38:36.098031 kernel: audit: type=1305 audit(1707435516.093:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:38:36.098046 systemd-journald[1178]: Journal started Feb 8 23:38:36.098091 systemd-journald[1178]: Runtime Journal (/run/log/journal/b27e076ba0d04fe8ba4a309bc144659d) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:38:35.808000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:38:36.093000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:38:36.129402 systemd[1]: Started systemd-journald.service. Feb 8 23:38:36.129495 kernel: audit: type=1300 audit(1707435516.093:92): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd7d1fdc60 a2=4000 a3=7ffd7d1fdcfc items=0 ppid=1 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:36.093000 audit[1178]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd7d1fdc60 a2=4000 a3=7ffd7d1fdcfc items=0 ppid=1 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:36.135266 systemd[1]: Mounted sys-kernel-debug.mount. 
Feb 8 23:38:36.137420 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:38:36.139738 systemd[1]: Mounted tmp.mount. Feb 8 23:38:36.141790 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:38:36.093000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:38:36.150376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:38:36.150594 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:38:36.152662 kernel: audit: type=1327 audit(1707435516.093:92): proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:38:36.153042 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:38:36.153250 systemd[1]: Finished modprobe@drm.service. Feb 8 23:38:36.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.168241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:38:36.168444 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:38:36.170631 kernel: audit: type=1130 audit(1707435516.133:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.170952 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:38:36.171142 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:38:36.173662 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:38:36.177207 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:38:36.179737 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:38:36.182548 systemd[1]: Reached target network-pre.target. Feb 8 23:38:36.194847 kernel: loop: module loaded Feb 8 23:38:36.186213 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 8 23:38:36.194761 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:38:36.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.200298 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:38:36.203797 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:38:36.219613 kernel: audit: type=1130 audit(1707435516.149:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.219676 kernel: fuse: init (API version 7.34) Feb 8 23:38:36.218099 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:38:36.219367 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:38:36.224332 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:38:36.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.227863 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:38:36.228011 systemd[1]: Finished modprobe@loop.service. Feb 8 23:38:36.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:36.252649 kernel: audit: type=1130 audit(1707435516.152:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.252698 kernel: audit: type=1131 audit(1707435516.152:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.255798 kernel: audit: type=1130 audit(1707435516.167:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.259334 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:38:36.270697 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:38:36.270918 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:38:36.273377 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:38:36.275835 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:38:36.278064 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:38:36.281797 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:38:36.284777 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:38:36.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:36.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:36.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.292125 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:38:36.300396 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:38:36.304752 systemd-journald[1178]: Time spent on flushing to /var/log/journal/b27e076ba0d04fe8ba4a309bc144659d is 22.286ms for 1121 entries. Feb 8 23:38:36.304752 systemd-journald[1178]: System Journal (/var/log/journal/b27e076ba0d04fe8ba4a309bc144659d) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:38:36.409236 systemd-journald[1178]: Received client request to flush runtime journal. Feb 8 23:38:36.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:36.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.339862 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:38:36.356816 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:38:36.414175 udevadm[1240]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:38:36.360919 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:38:36.410385 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:38:36.814026 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:38:36.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:36.818147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:38:37.119223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:38:37.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:37.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:37.407054 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 8 23:38:37.411517 systemd[1]: Starting systemd-udevd.service... Feb 8 23:38:37.429842 systemd-udevd[1248]: Using default interface naming scheme 'v252'. Feb 8 23:38:37.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:37.660373 systemd[1]: Started systemd-udevd.service. Feb 8 23:38:37.665231 systemd[1]: Starting systemd-networkd.service... Feb 8 23:38:37.704989 systemd[1]: Found device dev-ttyS0.device. Feb 8 23:38:37.772493 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:38:37.784000 audit[1251]: AVC avc: denied { confidentiality } for pid=1251 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:38:37.806645 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:38:37.830280 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:38:37.830365 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:38:37.843835 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:38:37.843927 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:38:37.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:37.850573 systemd[1]: Started systemd-userdbd.service. 
Feb 8 23:38:37.784000 audit[1251]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55af2290ee70 a1=f884 a2=7f19a572cbc5 a3=5 items=12 ppid=1248 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:37.784000 audit: CWD cwd="/" Feb 8 23:38:37.784000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=1 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=2 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=3 name=(null) inode=14303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=4 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.861649 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:38:37.784000 audit: PATH item=5 name=(null) inode=14304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=6 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH 
item=7 name=(null) inode=14305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=8 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=9 name=(null) inode=14306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=10 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PATH item=11 name=(null) inode=14307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:37.784000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:38:37.897834 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:38:37.900194 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:38:37.900215 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:38:37.908642 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:38:37.920587 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:38:37.920678 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:38:37.920710 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:38:38.145785 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1257) Feb 8 23:38:38.207953 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:38:38.207896 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check 
(ConditionPathExists=!/usr/.noupdate). Feb 8 23:38:38.272155 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:38:38.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:38.276283 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:38:38.328312 systemd-networkd[1261]: lo: Link UP Feb 8 23:38:38.328322 systemd-networkd[1261]: lo: Gained carrier Feb 8 23:38:38.329084 systemd-networkd[1261]: Enumeration completed Feb 8 23:38:38.329239 systemd[1]: Started systemd-networkd.service. Feb 8 23:38:38.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:38.332951 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:38:38.356697 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:38:38.410778 kernel: mlx5_core 4770:00:02.0 enP18288s1: Link up Feb 8 23:38:38.448773 kernel: hv_netvsc 002248a0-0d66-0022-48a0-0d66002248a0 eth0: Data path switched to VF: enP18288s1 Feb 8 23:38:38.450354 systemd-networkd[1261]: enP18288s1: Link UP Feb 8 23:38:38.450521 systemd-networkd[1261]: eth0: Link UP Feb 8 23:38:38.450529 systemd-networkd[1261]: eth0: Gained carrier Feb 8 23:38:38.455085 systemd-networkd[1261]: enP18288s1: Gained carrier Feb 8 23:38:38.482862 systemd-networkd[1261]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:38:38.609629 lvm[1326]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:38:38.635987 systemd[1]: Finished lvm2-activation-early.service. 
Feb 8 23:38:38.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:38.638680 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:38:38.642578 systemd[1]: Starting lvm2-activation.service...
Feb 8 23:38:38.649094 lvm[1329]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:38:38.677033 systemd[1]: Finished lvm2-activation.service.
Feb 8 23:38:38.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:38.679637 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:38:38.681907 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 8 23:38:38.681945 systemd[1]: Reached target local-fs.target.
Feb 8 23:38:38.683868 systemd[1]: Reached target machines.target.
Feb 8 23:38:38.687704 systemd[1]: Starting ldconfig.service...
Feb 8 23:38:38.690671 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 8 23:38:38.690797 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:38.692387 systemd[1]: Starting systemd-boot-update.service...
Feb 8 23:38:38.695776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 8 23:38:38.699563 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 8 23:38:38.701933 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:38:38.702040 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:38:38.703208 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 8 23:38:39.249404 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 8 23:38:39.340673 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1332 (bootctl)
Feb 8 23:38:39.342301 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 8 23:38:39.401300 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 8 23:38:39.404494 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 8 23:38:39.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:39.406797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 8 23:38:39.831128 systemd-networkd[1261]: eth0: Gained IPv6LL
Feb 8 23:38:39.836881 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:38:39.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.217863 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 8 23:38:40.218778 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 8 23:38:40.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.792769 systemd-fsck[1341]: fsck.fat 4.2 (2021-01-31)
Feb 8 23:38:40.792769 systemd-fsck[1341]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 8 23:38:40.793074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 8 23:38:40.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.798705 systemd[1]: Mounting boot.mount...
Feb 8 23:38:40.814363 systemd[1]: Mounted boot.mount.
Feb 8 23:38:40.828378 systemd[1]: Finished systemd-boot-update.service.
Feb 8 23:38:40.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.949098 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 8 23:38:40.957000 kernel: kauditd_printk_skb: 47 callbacks suppressed
Feb 8 23:38:40.957088 kernel: audit: type=1130 audit(1707435520.951:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.956330 systemd[1]: Starting audit-rules.service...
Feb 8 23:38:40.969627 systemd[1]: Starting clean-ca-certificates.service...
Feb 8 23:38:40.973364 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 8 23:38:40.979055 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:38:40.983321 systemd[1]: Starting systemd-timesyncd.service...
Feb 8 23:38:40.989558 systemd[1]: Starting systemd-update-utmp.service...
Feb 8 23:38:40.992374 systemd[1]: Finished clean-ca-certificates.service.
Feb 8 23:38:40.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:40.997325 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 8 23:38:41.005922 kernel: audit: type=1130 audit(1707435520.993:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.011000 audit[1360]: SYSTEM_BOOT pid=1360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.016707 systemd[1]: Finished systemd-update-utmp.service.
Feb 8 23:38:41.029827 kernel: audit: type=1127 audit(1707435521.011:132): pid=1360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.029922 kernel: audit: type=1130 audit(1707435521.026:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.127993 systemd[1]: Started systemd-timesyncd.service.
Feb 8 23:38:41.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.130882 systemd[1]: Reached target time-set.target.
Feb 8 23:38:41.143398 kernel: audit: type=1130 audit(1707435521.129:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.203109 systemd-resolved[1358]: Positive Trust Anchors:
Feb 8 23:38:41.203125 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:38:41.203169 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:38:41.253201 systemd-timesyncd[1359]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org).
Feb 8 23:38:41.253385 systemd-timesyncd[1359]: Initial clock synchronization to Thu 2024-02-08 23:38:41.254635 UTC.
Feb 8 23:38:41.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.282447 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 8 23:38:41.298763 kernel: audit: type=1130 audit(1707435521.284:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:41.307000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:38:41.308963 systemd[1]: Finished audit-rules.service.
Feb 8 23:38:41.315034 augenrules[1376]: No rules
Feb 8 23:38:41.307000 audit[1376]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff74e96690 a2=420 a3=0 items=0 ppid=1353 pid=1376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:38:41.333367 kernel: audit: type=1305 audit(1707435521.307:136): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:38:41.333439 kernel: audit: type=1300 audit(1707435521.307:136): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff74e96690 a2=420 a3=0 items=0 ppid=1353 pid=1376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:38:41.307000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 8 23:38:41.343016 kernel: audit: type=1327 audit(1707435521.307:136): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 8 23:38:41.377916 systemd-resolved[1358]: Using system hostname 'ci-3510.3.2-a-df5d74ad8f'.
Feb 8 23:38:41.379822 systemd[1]: Started systemd-resolved.service.
Feb 8 23:38:41.382267 systemd[1]: Reached target network.target.
Feb 8 23:38:41.384112 systemd[1]: Reached target network-online.target.
Feb 8 23:38:41.386138 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:38:47.408597 ldconfig[1331]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 8 23:38:47.416433 systemd[1]: Finished ldconfig.service.
Feb 8 23:38:47.421209 systemd[1]: Starting systemd-update-done.service...
Feb 8 23:38:47.441818 systemd[1]: Finished systemd-update-done.service.
Feb 8 23:38:47.444632 systemd[1]: Reached target sysinit.target.
Feb 8 23:38:47.446953 systemd[1]: Started motdgen.path.
Feb 8 23:38:47.448811 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 8 23:38:47.451713 systemd[1]: Started logrotate.timer.
Feb 8 23:38:47.453513 systemd[1]: Started mdadm.timer.
Feb 8 23:38:47.455096 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 8 23:38:47.457129 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 8 23:38:47.457274 systemd[1]: Reached target paths.target.
Feb 8 23:38:47.459048 systemd[1]: Reached target timers.target.
Feb 8 23:38:47.461942 systemd[1]: Listening on dbus.socket.
Feb 8 23:38:47.465172 systemd[1]: Starting docker.socket...
Feb 8 23:38:47.469040 systemd[1]: Listening on sshd.socket.
Feb 8 23:38:47.471312 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:47.471992 systemd[1]: Listening on docker.socket.
Feb 8 23:38:47.473841 systemd[1]: Reached target sockets.target.
Feb 8 23:38:47.475795 systemd[1]: Reached target basic.target.
Feb 8 23:38:47.477954 systemd[1]: System is tainted: cgroupsv1
Feb 8 23:38:47.478124 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:38:47.478240 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:38:47.479459 systemd[1]: Starting containerd.service...
Feb 8 23:38:47.482953 systemd[1]: Starting dbus.service...
Feb 8 23:38:47.485927 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 8 23:38:47.489226 systemd[1]: Starting extend-filesystems.service...
Feb 8 23:38:47.491260 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 8 23:38:47.492621 systemd[1]: Starting motdgen.service...
Feb 8 23:38:47.496094 systemd[1]: Started nvidia.service.
Feb 8 23:38:47.499052 systemd[1]: Starting prepare-cni-plugins.service...
Feb 8 23:38:47.502629 systemd[1]: Starting prepare-critools.service...
Feb 8 23:38:47.506007 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 8 23:38:47.509939 systemd[1]: Starting sshd-keygen.service...
Feb 8 23:38:47.514190 systemd[1]: Starting systemd-logind.service...
Feb 8 23:38:47.516118 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:47.516207 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 8 23:38:47.517668 systemd[1]: Starting update-engine.service...
Feb 8 23:38:47.520975 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 8 23:38:47.555199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 8 23:38:47.555496 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 8 23:38:47.602711 systemd-logind[1403]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 8 23:38:47.603241 systemd-logind[1403]: New seat seat0.
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda1
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda2
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda3
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found usr
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda4
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda6
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda7
Feb 8 23:38:47.624715 extend-filesystems[1392]: Found sda9
Feb 8 23:38:47.624715 extend-filesystems[1392]: Checking size of /dev/sda9
Feb 8 23:38:47.655622 jq[1391]: false
Feb 8 23:38:47.655792 jq[1405]: true
Feb 8 23:38:47.640755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 8 23:38:47.641069 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 8 23:38:47.660665 jq[1430]: true
Feb 8 23:38:47.666235 systemd[1]: motdgen.service: Deactivated successfully.
Feb 8 23:38:47.666536 systemd[1]: Finished motdgen.service.
Feb 8 23:38:47.689597 tar[1407]: ./
Feb 8 23:38:47.689597 tar[1407]: ./macvlan
Feb 8 23:38:47.695995 tar[1408]: crictl
Feb 8 23:38:47.750603 extend-filesystems[1392]: Old size kept for /dev/sda9
Feb 8 23:38:47.753102 extend-filesystems[1392]: Found sr0
Feb 8 23:38:47.755084 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 8 23:38:47.755378 systemd[1]: Finished extend-filesystems.service.
Feb 8 23:38:47.815873 systemd[1]: nvidia.service: Deactivated successfully.
Feb 8 23:38:47.817375 env[1432]: time="2024-02-08T23:38:47.817322434Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 8 23:38:47.832945 tar[1407]: ./static
Feb 8 23:38:47.833473 bash[1454]: Updated "/home/core/.ssh/authorized_keys"
Feb 8 23:38:47.833863 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 8 23:38:47.843548 dbus-daemon[1390]: [system] SELinux support is enabled
Feb 8 23:38:47.843756 systemd[1]: Started dbus.service.
Feb 8 23:38:47.848294 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 8 23:38:47.848333 systemd[1]: Reached target system-config.target.
Feb 8 23:38:47.851258 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 8 23:38:47.851285 systemd[1]: Reached target user-config.target.
Feb 8 23:38:47.858832 systemd[1]: Started systemd-logind.service.
Feb 8 23:38:47.861478 dbus-daemon[1390]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 8 23:38:47.912186 tar[1407]: ./vlan
Feb 8 23:38:47.940269 env[1432]: time="2024-02-08T23:38:47.940223038Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 8 23:38:47.940411 env[1432]: time="2024-02-08T23:38:47.940386949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.945007 env[1432]: time="2024-02-08T23:38:47.944968266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:47.945007 env[1432]: time="2024-02-08T23:38:47.945006269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946144 env[1432]: time="2024-02-08T23:38:47.945313890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946232 env[1432]: time="2024-02-08T23:38:47.946147748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946232 env[1432]: time="2024-02-08T23:38:47.946168749Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 8 23:38:47.946232 env[1432]: time="2024-02-08T23:38:47.946181750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946347 env[1432]: time="2024-02-08T23:38:47.946279957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946569 env[1432]: time="2024-02-08T23:38:47.946544775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946846 env[1432]: time="2024-02-08T23:38:47.946816494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:47.946915 env[1432]: time="2024-02-08T23:38:47.946849296Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 8 23:38:47.946959 env[1432]: time="2024-02-08T23:38:47.946917901Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 8 23:38:47.946959 env[1432]: time="2024-02-08T23:38:47.946934502Z" level=info msg="metadata content store policy set" policy=shared
Feb 8 23:38:47.972953 env[1432]: time="2024-02-08T23:38:47.972918500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 8 23:38:47.973051 env[1432]: time="2024-02-08T23:38:47.972986405Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 8 23:38:47.973051 env[1432]: time="2024-02-08T23:38:47.973009406Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 8 23:38:47.973131 env[1432]: time="2024-02-08T23:38:47.973058910Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973131 env[1432]: time="2024-02-08T23:38:47.973078511Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973206 env[1432]: time="2024-02-08T23:38:47.973098712Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973206 env[1432]: time="2024-02-08T23:38:47.973168817Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973206 env[1432]: time="2024-02-08T23:38:47.973190519Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973312 env[1432]: time="2024-02-08T23:38:47.973222021Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973312 env[1432]: time="2024-02-08T23:38:47.973242622Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973312 env[1432]: time="2024-02-08T23:38:47.973259323Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.973312 env[1432]: time="2024-02-08T23:38:47.973278125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 8 23:38:47.973451 env[1432]: time="2024-02-08T23:38:47.973415434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 8 23:38:47.973565 env[1432]: time="2024-02-08T23:38:47.973543443Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 8 23:38:47.974158 env[1432]: time="2024-02-08T23:38:47.974132984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 8 23:38:47.974223 env[1432]: time="2024-02-08T23:38:47.974176587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974223 env[1432]: time="2024-02-08T23:38:47.974216490Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 8 23:38:47.974310 env[1432]: time="2024-02-08T23:38:47.974293895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974358 env[1432]: time="2024-02-08T23:38:47.974318497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974421 env[1432]: time="2024-02-08T23:38:47.974401302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974462 env[1432]: time="2024-02-08T23:38:47.974440305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974499 env[1432]: time="2024-02-08T23:38:47.974462307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974499 env[1432]: time="2024-02-08T23:38:47.974480508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974568 env[1432]: time="2024-02-08T23:38:47.974510910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974568 env[1432]: time="2024-02-08T23:38:47.974529211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974568 env[1432]: time="2024-02-08T23:38:47.974549213Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 8 23:38:47.974771 env[1432]: time="2024-02-08T23:38:47.974727925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974832 env[1432]: time="2024-02-08T23:38:47.974778529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974832 env[1432]: time="2024-02-08T23:38:47.974798130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.974909 env[1432]: time="2024-02-08T23:38:47.974815831Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 8 23:38:47.974909 env[1432]: time="2024-02-08T23:38:47.974851434Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 8 23:38:47.974909 env[1432]: time="2024-02-08T23:38:47.974868135Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 8 23:38:47.974909 env[1432]: time="2024-02-08T23:38:47.974890636Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 8 23:38:47.975051 env[1432]: time="2024-02-08T23:38:47.974944640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 8 23:38:47.975337 env[1432]: time="2024-02-08T23:38:47.975255662Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.975351868Z" level=info msg="Connect containerd service"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.975406472Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976153324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976287833Z" level=info msg="Start subscribing containerd event"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976341537Z" level=info msg="Start recovering state"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976405041Z" level=info msg="Start event monitor"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976424442Z" level=info msg="Start snapshots syncer"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976436143Z" level=info msg="Start cni network conf syncer for default"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976445744Z" level=info msg="Start streaming server"
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976852972Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 8 23:38:48.006804 env[1432]: time="2024-02-08T23:38:47.976954579Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 8 23:38:47.991424 systemd[1]: Started containerd.service.
Feb 8 23:38:48.013936 env[1432]: time="2024-02-08T23:38:48.013904979Z" level=info msg="containerd successfully booted in 0.212021s"
Feb 8 23:38:48.026573 tar[1407]: ./portmap
Feb 8 23:38:48.102822 tar[1407]: ./host-local
Feb 8 23:38:48.176232 tar[1407]: ./vrf
Feb 8 23:38:48.249088 tar[1407]: ./bridge
Feb 8 23:38:48.331679 tar[1407]: ./tuning
Feb 8 23:38:48.403675 tar[1407]: ./firewall
Feb 8 23:38:48.436415 sshd_keygen[1415]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 8 23:38:48.442719 update_engine[1404]: I0208 23:38:48.442173 1404 main.cc:92] Flatcar Update Engine starting
Feb 8 23:38:48.463499 tar[1407]: ./host-device
Feb 8 23:38:48.491186 systemd[1]: Started update-engine.service.
Feb 8 23:38:48.495889 systemd[1]: Started locksmithd.service.
Feb 8 23:38:48.498900 update_engine[1404]: I0208 23:38:48.498868 1404 update_check_scheduler.cc:74] Next update check in 10m42s
Feb 8 23:38:48.505591 systemd[1]: Finished sshd-keygen.service.
Feb 8 23:38:48.509500 systemd[1]: Starting issuegen.service...
Feb 8 23:38:48.512811 systemd[1]: Started waagent.service.
Feb 8 23:38:48.521759 systemd[1]: issuegen.service: Deactivated successfully.
Feb 8 23:38:48.522012 systemd[1]: Finished issuegen.service.
Feb 8 23:38:48.525862 systemd[1]: Starting systemd-user-sessions.service...
Feb 8 23:38:48.540571 systemd[1]: Finished systemd-user-sessions.service.
Feb 8 23:38:48.545646 tar[1407]: ./sbr
Feb 8 23:38:48.544384 systemd[1]: Started getty@tty1.service.
Feb 8 23:38:48.547725 systemd[1]: Started serial-getty@ttyS0.service.
Feb 8 23:38:48.550128 systemd[1]: Reached target getty.target. Feb 8 23:38:48.601896 tar[1407]: ./loopback Feb 8 23:38:48.644372 systemd[1]: Finished prepare-critools.service. Feb 8 23:38:48.653640 tar[1407]: ./dhcp Feb 8 23:38:48.732371 tar[1407]: ./ptp Feb 8 23:38:48.764879 tar[1407]: ./ipvlan Feb 8 23:38:48.796579 tar[1407]: ./bandwidth Feb 8 23:38:48.889257 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:38:48.892136 systemd[1]: Reached target multi-user.target. Feb 8 23:38:48.896244 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:38:48.905999 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:38:48.906260 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:38:48.912200 systemd[1]: Startup finished in 935ms (firmware) + 27.090s (loader) + 21.863s (kernel) + 24.255s (userspace) = 1min 14.145s. Feb 8 23:38:49.263902 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:38:49.265025 login[1520]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:38:49.289288 systemd[1]: Created slice user-500.slice. Feb 8 23:38:49.291039 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:38:49.298238 systemd-logind[1403]: New session 2 of user core. Feb 8 23:38:49.301266 systemd-logind[1403]: New session 1 of user core. Feb 8 23:38:49.305808 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:38:49.308966 systemd[1]: Starting user@500.service... Feb 8 23:38:49.315287 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:38:49.472979 systemd[1537]: Queued start job for default target default.target. Feb 8 23:38:49.473300 systemd[1537]: Reached target paths.target. Feb 8 23:38:49.473328 systemd[1537]: Reached target sockets.target. Feb 8 23:38:49.473349 systemd[1537]: Reached target timers.target. 
Feb 8 23:38:49.473370 systemd[1537]: Reached target basic.target. Feb 8 23:38:49.473436 systemd[1537]: Reached target default.target. Feb 8 23:38:49.473480 systemd[1537]: Startup finished in 152ms. Feb 8 23:38:49.473553 systemd[1]: Started user@500.service. Feb 8 23:38:49.475220 systemd[1]: Started session-1.scope. Feb 8 23:38:49.476193 systemd[1]: Started session-2.scope. Feb 8 23:38:49.906274 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:38:53.974099 waagent[1512]: 2024-02-08T23:38:53.973985Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:38:53.978236 waagent[1512]: 2024-02-08T23:38:53.978162Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:38:53.987944 waagent[1512]: 2024-02-08T23:38:53.979424Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:38:53.987944 waagent[1512]: 2024-02-08T23:38:53.980662Z INFO Daemon Daemon Run daemon Feb 8 23:38:53.987944 waagent[1512]: 2024-02-08T23:38:53.981649Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:38:53.992559 waagent[1512]: 2024-02-08T23:38:53.992454Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 8 23:38:54.000074 waagent[1512]: 2024-02-08T23:38:53.999973Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:38:54.004667 waagent[1512]: 2024-02-08T23:38:54.004605Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:38:54.007293 waagent[1512]: 2024-02-08T23:38:54.007234Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:38:54.010319 waagent[1512]: 2024-02-08T23:38:54.010257Z INFO Daemon Daemon Activate resource disk Feb 8 23:38:54.012623 waagent[1512]: 2024-02-08T23:38:54.012564Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:38:54.022363 waagent[1512]: 2024-02-08T23:38:54.022298Z INFO Daemon Daemon Found device: None Feb 8 23:38:54.024671 waagent[1512]: 2024-02-08T23:38:54.024611Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:38:54.028561 waagent[1512]: 2024-02-08T23:38:54.028504Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:38:54.034349 waagent[1512]: 2024-02-08T23:38:54.034288Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:38:54.037370 waagent[1512]: 2024-02-08T23:38:54.037311Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:38:54.046082 waagent[1512]: 2024-02-08T23:38:54.045962Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 8 23:38:54.053441 waagent[1512]: 2024-02-08T23:38:54.053338Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:38:54.061430 waagent[1512]: 2024-02-08T23:38:54.054559Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:38:54.061430 waagent[1512]: 2024-02-08T23:38:54.055383Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:38:54.075433 waagent[1512]: 2024-02-08T23:38:54.075324Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:38:54.169136 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:38:54.174063 waagent[1512]: 2024-02-08T23:38:54.173947Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:38:54.176984 waagent[1512]: 2024-02-08T23:38:54.176914Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:38:54.179998 waagent[1512]: 2024-02-08T23:38:54.179937Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 8 23:38:54.183112 waagent[1512]: 2024-02-08T23:38:54.183055Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:38:54.185820 waagent[1512]: 2024-02-08T23:38:54.185762Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:38:54.188299 waagent[1512]: 2024-02-08T23:38:54.188241Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:38:54.298978 waagent[1512]: 2024-02-08T23:38:54.298817Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:38:54.303272 waagent[1512]: 2024-02-08T23:38:54.303226Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:38:54.306336 waagent[1512]: 2024-02-08T23:38:54.306276Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:38:54.629990 waagent[1512]: 2024-02-08T23:38:54.629785Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:38:54.639362 waagent[1512]: 2024-02-08T23:38:54.639283Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 8 23:38:54.644256 waagent[1512]: 2024-02-08T23:38:54.640607Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:38:54.717787 waagent[1512]: 2024-02-08T23:38:54.717640Z INFO Daemon Daemon Found private key matching thumbprint 2416ACF04E76212CE2025DC0C4DD077B3186760A Feb 8 23:38:54.728874 waagent[1512]: 2024-02-08T23:38:54.719228Z INFO Daemon Daemon Certificate with thumbprint 64880B94E5AD8AF343261AF9B5CDDD41ED668063 has no matching private key. Feb 8 23:38:54.728874 waagent[1512]: 2024-02-08T23:38:54.720318Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:38:54.735188 waagent[1512]: 2024-02-08T23:38:54.735135Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: b0f7a264-ecb5-4219-b5a2-61971e54ceb5 New eTag: 13302271492776848386] Feb 8 23:38:54.742856 waagent[1512]: 2024-02-08T23:38:54.736763Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:38:54.748174 waagent[1512]: 2024-02-08T23:38:54.748115Z INFO Daemon Daemon Starting provisioning Feb 8 23:38:54.754462 waagent[1512]: 2024-02-08T23:38:54.749278Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:38:54.754462 waagent[1512]: 2024-02-08T23:38:54.750069Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-df5d74ad8f] Feb 8 23:38:54.811233 waagent[1512]: 2024-02-08T23:38:54.811077Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-df5d74ad8f] Feb 8 23:38:54.819732 waagent[1512]: 2024-02-08T23:38:54.812897Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:38:54.819732 waagent[1512]: 2024-02-08T23:38:54.814379Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:38:54.828416 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:38:54.828732 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:38:54.828825 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:38:54.829123 systemd[1]: Stopping systemd-networkd.service... 
Feb 8 23:38:54.834789 systemd-networkd[1261]: eth0: DHCPv6 lease lost Feb 8 23:38:54.836231 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:38:54.836544 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:38:54.839624 systemd[1]: Starting systemd-networkd.service... Feb 8 23:38:54.875601 systemd-networkd[1578]: enP18288s1: Link UP Feb 8 23:38:54.875610 systemd-networkd[1578]: enP18288s1: Gained carrier Feb 8 23:38:54.877061 systemd-networkd[1578]: eth0: Link UP Feb 8 23:38:54.877070 systemd-networkd[1578]: eth0: Gained carrier Feb 8 23:38:54.877495 systemd-networkd[1578]: lo: Link UP Feb 8 23:38:54.877504 systemd-networkd[1578]: lo: Gained carrier Feb 8 23:38:54.877894 systemd-networkd[1578]: eth0: Gained IPv6LL Feb 8 23:38:54.878174 systemd-networkd[1578]: Enumeration completed Feb 8 23:38:54.878308 systemd[1]: Started systemd-networkd.service. Feb 8 23:38:54.883965 waagent[1512]: 2024-02-08T23:38:54.880671Z INFO Daemon Daemon Create user account if not exists Feb 8 23:38:54.880334 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:38:54.888553 waagent[1512]: 2024-02-08T23:38:54.884658Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:38:54.888258 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:38:54.889114 waagent[1512]: 2024-02-08T23:38:54.889038Z INFO Daemon Daemon Configure sudoer Feb 8 23:38:54.892393 waagent[1512]: 2024-02-08T23:38:54.892331Z INFO Daemon Daemon Configure sshd Feb 8 23:38:54.895605 waagent[1512]: 2024-02-08T23:38:54.895535Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:38:54.926876 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:38:54.931208 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 8 23:38:54.939041 waagent[1512]: 2024-02-08T23:38:54.938929Z INFO Daemon Daemon Decode custom data Feb 8 23:38:54.942014 waagent[1512]: 2024-02-08T23:38:54.941947Z INFO Daemon Daemon Save custom data Feb 8 23:38:56.179161 waagent[1512]: 2024-02-08T23:38:56.179065Z INFO Daemon Daemon Provisioning complete Feb 8 23:38:56.192181 waagent[1512]: 2024-02-08T23:38:56.192107Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:38:56.199410 waagent[1512]: 2024-02-08T23:38:56.193477Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:38:56.199410 waagent[1512]: 2024-02-08T23:38:56.195181Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:38:56.459159 waagent[1588]: 2024-02-08T23:38:56.459070Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:38:56.459937 waagent[1588]: 2024-02-08T23:38:56.459875Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:56.460086 waagent[1588]: 2024-02-08T23:38:56.460029Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:56.470731 waagent[1588]: 2024-02-08T23:38:56.470660Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:38:56.470900 waagent[1588]: 2024-02-08T23:38:56.470845Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:38:56.529208 waagent[1588]: 2024-02-08T23:38:56.529089Z INFO ExtHandler ExtHandler Found private key matching thumbprint 2416ACF04E76212CE2025DC0C4DD077B3186760A Feb 8 23:38:56.529422 waagent[1588]: 2024-02-08T23:38:56.529359Z INFO ExtHandler ExtHandler Certificate with thumbprint 64880B94E5AD8AF343261AF9B5CDDD41ED668063 has no matching private key. 
Feb 8 23:38:56.529655 waagent[1588]: 2024-02-08T23:38:56.529605Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:38:56.543290 waagent[1588]: 2024-02-08T23:38:56.543231Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: c34ad5ac-7986-47f2-9c64-38aeeb1068e5 New eTag: 13302271492776848386] Feb 8 23:38:56.543839 waagent[1588]: 2024-02-08T23:38:56.543781Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:38:56.623992 waagent[1588]: 2024-02-08T23:38:56.623843Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:38:56.633328 waagent[1588]: 2024-02-08T23:38:56.633253Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1588 Feb 8 23:38:56.636672 waagent[1588]: 2024-02-08T23:38:56.636607Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:38:56.637935 waagent[1588]: 2024-02-08T23:38:56.637878Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:38:56.709436 waagent[1588]: 2024-02-08T23:38:56.709301Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:38:56.709843 waagent[1588]: 2024-02-08T23:38:56.709784Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:38:56.717598 waagent[1588]: 2024-02-08T23:38:56.717542Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:38:56.718085 waagent[1588]: 2024-02-08T23:38:56.718027Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:38:56.719164 waagent[1588]: 2024-02-08T23:38:56.719098Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:38:56.720423 waagent[1588]: 2024-02-08T23:38:56.720364Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:38:56.720848 waagent[1588]: 2024-02-08T23:38:56.720792Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:56.721176 waagent[1588]: 2024-02-08T23:38:56.721122Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:56.721678 waagent[1588]: 2024-02-08T23:38:56.721624Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:38:56.722000 waagent[1588]: 2024-02-08T23:38:56.721942Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:38:56.722000 waagent[1588]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:38:56.722000 waagent[1588]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:38:56.722000 waagent[1588]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:38:56.722000 waagent[1588]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:56.722000 waagent[1588]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:56.722000 waagent[1588]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:56.725113 waagent[1588]: 2024-02-08T23:38:56.724904Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 8 23:38:56.725991 waagent[1588]: 2024-02-08T23:38:56.725928Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:38:56.726135 waagent[1588]: 2024-02-08T23:38:56.726059Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:56.726520 waagent[1588]: 2024-02-08T23:38:56.726461Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:38:56.727228 waagent[1588]: 2024-02-08T23:38:56.727166Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:38:56.727428 waagent[1588]: 2024-02-08T23:38:56.727357Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:38:56.727667 waagent[1588]: 2024-02-08T23:38:56.727612Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:38:56.728133 waagent[1588]: 2024-02-08T23:38:56.728082Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:56.728800 waagent[1588]: 2024-02-08T23:38:56.728726Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:38:56.729291 waagent[1588]: 2024-02-08T23:38:56.729239Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:38:56.731215 waagent[1588]: 2024-02-08T23:38:56.731159Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:38:56.742018 waagent[1588]: 2024-02-08T23:38:56.741965Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:38:56.742672 waagent[1588]: 2024-02-08T23:38:56.742616Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:38:56.743484 waagent[1588]: 2024-02-08T23:38:56.743427Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:38:56.764897 waagent[1588]: 2024-02-08T23:38:56.764795Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1578' Feb 8 23:38:56.779638 waagent[1588]: 2024-02-08T23:38:56.779569Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 8 23:38:56.865582 waagent[1588]: 2024-02-08T23:38:56.865477Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:38:56.865582 waagent[1588]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:38:56.865582 waagent[1588]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:38:56.865582 waagent[1588]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:0d:66 brd ff:ff:ff:ff:ff:ff Feb 8 23:38:56.865582 waagent[1588]: 3: enP18288s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:0d:66 brd ff:ff:ff:ff:ff:ff\ altname enP18288p0s2 Feb 8 23:38:56.865582 waagent[1588]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:38:56.865582 waagent[1588]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:38:56.865582 waagent[1588]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:38:56.865582 waagent[1588]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:38:56.865582 waagent[1588]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:38:56.865582 waagent[1588]: 2: eth0 inet6 fe80::222:48ff:fea0:d66/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:38:57.109099 waagent[1588]: 2024-02-08T23:38:57.108922Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 8 23:38:57.112110 waagent[1588]: 2024-02-08T23:38:57.112004Z INFO 
EnvHandler ExtHandler Firewall rules: Feb 8 23:38:57.112110 waagent[1588]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:57.112110 waagent[1588]: pkts bytes target prot opt in out source destination Feb 8 23:38:57.112110 waagent[1588]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:57.112110 waagent[1588]: pkts bytes target prot opt in out source destination Feb 8 23:38:57.112110 waagent[1588]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:57.112110 waagent[1588]: pkts bytes target prot opt in out source destination Feb 8 23:38:57.112110 waagent[1588]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:38:57.112110 waagent[1588]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:38:57.113452 waagent[1588]: 2024-02-08T23:38:57.113397Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:38:57.127061 waagent[1588]: 2024-02-08T23:38:57.127000Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:38:57.199007 waagent[1512]: 2024-02-08T23:38:57.198857Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:38:57.205286 waagent[1512]: 2024-02-08T23:38:57.205211Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:38:58.271374 waagent[1627]: 2024-02-08T23:38:58.271261Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:38:58.272091 waagent[1627]: 2024-02-08T23:38:58.272027Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:38:58.272238 waagent[1627]: 2024-02-08T23:38:58.272184Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:38:58.281515 waagent[1627]: 2024-02-08T23:38:58.281411Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; 
systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:38:58.281911 waagent[1627]: 2024-02-08T23:38:58.281854Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:58.282077 waagent[1627]: 2024-02-08T23:38:58.282026Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:58.293224 waagent[1627]: 2024-02-08T23:38:58.293150Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:38:58.301291 waagent[1627]: 2024-02-08T23:38:58.301232Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:38:58.302193 waagent[1627]: 2024-02-08T23:38:58.302133Z INFO ExtHandler Feb 8 23:38:58.302345 waagent[1627]: 2024-02-08T23:38:58.302294Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0a78117f-4d01-4f0a-a2b8-ab141902297b eTag: 13302271492776848386 source: Fabric] Feb 8 23:38:58.303058 waagent[1627]: 2024-02-08T23:38:58.302999Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 8 23:38:58.304132 waagent[1627]: 2024-02-08T23:38:58.304071Z INFO ExtHandler Feb 8 23:38:58.304269 waagent[1627]: 2024-02-08T23:38:58.304218Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:38:58.310674 waagent[1627]: 2024-02-08T23:38:58.310623Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:38:58.311105 waagent[1627]: 2024-02-08T23:38:58.311056Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:38:58.329546 waagent[1627]: 2024-02-08T23:38:58.329487Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 8 23:38:58.392649 waagent[1627]: 2024-02-08T23:38:58.392509Z INFO ExtHandler Downloaded certificate {'thumbprint': '64880B94E5AD8AF343261AF9B5CDDD41ED668063', 'hasPrivateKey': False} Feb 8 23:38:58.393619 waagent[1627]: 2024-02-08T23:38:58.393553Z INFO ExtHandler Downloaded certificate {'thumbprint': '2416ACF04E76212CE2025DC0C4DD077B3186760A', 'hasPrivateKey': True} Feb 8 23:38:58.394586 waagent[1627]: 2024-02-08T23:38:58.394524Z INFO ExtHandler Fetch goal state completed Feb 8 23:38:58.413182 waagent[1627]: 2024-02-08T23:38:58.413107Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1627 Feb 8 23:38:58.416421 waagent[1627]: 2024-02-08T23:38:58.416358Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:38:58.417885 waagent[1627]: 2024-02-08T23:38:58.417828Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:38:58.422837 waagent[1627]: 2024-02-08T23:38:58.422781Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:38:58.423198 waagent[1627]: 2024-02-08T23:38:58.423141Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:38:58.431174 waagent[1627]: 2024-02-08T23:38:58.431119Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:38:58.431612 waagent[1627]: 2024-02-08T23:38:58.431556Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:38:58.455088 waagent[1627]: 2024-02-08T23:38:58.454972Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. 
Feb 8 23:38:58.457908 waagent[1627]: 2024-02-08T23:38:58.457804Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 8 23:38:58.462610 waagent[1627]: 2024-02-08T23:38:58.462548Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:38:58.464038 waagent[1627]: 2024-02-08T23:38:58.463977Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:38:58.464504 waagent[1627]: 2024-02-08T23:38:58.464447Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:58.464663 waagent[1627]: 2024-02-08T23:38:58.464612Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:58.465215 waagent[1627]: 2024-02-08T23:38:58.465155Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:38:58.465493 waagent[1627]: 2024-02-08T23:38:58.465438Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:38:58.465493 waagent[1627]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:38:58.465493 waagent[1627]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:38:58.465493 waagent[1627]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:38:58.465493 waagent[1627]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:58.465493 waagent[1627]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:58.465493 waagent[1627]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:58.468400 waagent[1627]: 2024-02-08T23:38:58.468218Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 8 23:38:58.469348 waagent[1627]: 2024-02-08T23:38:58.469288Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:58.469513 waagent[1627]: 2024-02-08T23:38:58.469462Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:58.469972 waagent[1627]: 2024-02-08T23:38:58.469905Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:38:58.470133 waagent[1627]: 2024-02-08T23:38:58.470083Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:38:58.470449 waagent[1627]: 2024-02-08T23:38:58.470391Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:38:58.470655 waagent[1627]: 2024-02-08T23:38:58.470604Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:38:58.470749 waagent[1627]: 2024-02-08T23:38:58.470684Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:38:58.477811 waagent[1627]: 2024-02-08T23:38:58.477530Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:38:58.478156 waagent[1627]: 2024-02-08T23:38:58.478076Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:38:58.489695 waagent[1627]: 2024-02-08T23:38:58.483300Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:38:58.494849 waagent[1627]: 2024-02-08T23:38:58.494771Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:38:58.494849 waagent[1627]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:38:58.494849 waagent[1627]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:38:58.494849 waagent[1627]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:0d:66 brd ff:ff:ff:ff:ff:ff Feb 8 23:38:58.494849 waagent[1627]: 3: enP18288s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:0d:66 brd ff:ff:ff:ff:ff:ff\ altname enP18288p0s2 Feb 8 23:38:58.494849 waagent[1627]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:38:58.494849 waagent[1627]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:38:58.494849 waagent[1627]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:38:58.494849 waagent[1627]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:38:58.494849 waagent[1627]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:38:58.494849 waagent[1627]: 2: eth0 inet6 fe80::222:48ff:fea0:d66/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:38:58.501249 waagent[1627]: 2024-02-08T23:38:58.501163Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:38:58.503072 waagent[1627]: 2024-02-08T23:38:58.503015Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:38:58.533271 waagent[1627]: 2024-02-08T23:38:58.533173Z INFO ExtHandler ExtHandler Feb 8 23:38:58.534230 waagent[1627]: 
2024-02-08T23:38:58.534169Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: cd886132-fe98-475d-937f-93fe9f23ec68 correlation 4143b493-8168-42d6-bd2e-99505d32b814 created: 2024-02-08T23:37:24.252686Z] Feb 8 23:38:58.535415 waagent[1627]: 2024-02-08T23:38:58.535358Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:38:58.537181 waagent[1627]: 2024-02-08T23:38:58.537124Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 8 23:38:58.558720 waagent[1627]: 2024-02-08T23:38:58.558656Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:38:58.584789 waagent[1627]: 2024-02-08T23:38:58.583946Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D91EC61B-05FC-4E46-BEAE-D6446687BE6F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:38:58.594936 waagent[1627]: 2024-02-08T23:38:58.594861Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:38:58.594936 waagent[1627]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:58.594936 waagent[1627]: pkts bytes target prot opt in out source destination Feb 8 23:38:58.594936 waagent[1627]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:58.594936 waagent[1627]: pkts bytes target prot opt in out source destination Feb 8 23:38:58.594936 waagent[1627]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:38:58.594936 waagent[1627]: pkts bytes target prot opt in out source destination Feb 8 23:38:58.594936 waagent[1627]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:38:58.594936 waagent[1627]: 12 1347 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:38:58.594936 waagent[1627]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:39:26.025921 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 8 23:39:32.706796 systemd[1]: Created slice system-sshd.slice. Feb 8 23:39:32.708781 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.12.6:57200.service. Feb 8 23:39:33.514206 sshd[1668]: Accepted publickey for core from 10.200.12.6 port 57200 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:33.515887 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:33.521204 systemd-logind[1403]: New session 3 of user core. Feb 8 23:39:33.521914 systemd[1]: Started session-3.scope. Feb 8 23:39:33.793496 update_engine[1404]: I0208 23:39:33.793349 1404 update_attempter.cc:509] Updating boot flags... Feb 8 23:39:34.046917 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.12.6:57208.service. Feb 8 23:39:34.678856 sshd[1712]: Accepted publickey for core from 10.200.12.6 port 57208 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:34.680440 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:34.686207 systemd[1]: Started session-4.scope. Feb 8 23:39:34.686455 systemd-logind[1403]: New session 4 of user core. Feb 8 23:39:35.115467 sshd[1712]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:35.118768 systemd[1]: sshd@1-10.200.8.17:22-10.200.12.6:57208.service: Deactivated successfully. Feb 8 23:39:35.120162 systemd-logind[1403]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:39:35.120280 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:39:35.121823 systemd-logind[1403]: Removed session 4. Feb 8 23:39:35.218275 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.12.6:57210.service.
Feb 8 23:39:35.837533 sshd[1719]: Accepted publickey for core from 10.200.12.6 port 57210 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:35.839149 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:35.843876 systemd[1]: Started session-5.scope. Feb 8 23:39:35.844121 systemd-logind[1403]: New session 5 of user core. Feb 8 23:39:36.271982 sshd[1719]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:36.275237 systemd[1]: sshd@2-10.200.8.17:22-10.200.12.6:57210.service: Deactivated successfully. Feb 8 23:39:36.276625 systemd-logind[1403]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:39:36.276768 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:39:36.278331 systemd-logind[1403]: Removed session 5. Feb 8 23:39:36.373792 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.12.6:57214.service. Feb 8 23:39:36.990106 sshd[1726]: Accepted publickey for core from 10.200.12.6 port 57214 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:36.991685 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:36.997476 systemd[1]: Started session-6.scope. Feb 8 23:39:36.997795 systemd-logind[1403]: New session 6 of user core. Feb 8 23:39:37.426489 sshd[1726]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:37.429546 systemd[1]: sshd@3-10.200.8.17:22-10.200.12.6:57214.service: Deactivated successfully. Feb 8 23:39:37.430591 systemd-logind[1403]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:39:37.430682 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:39:37.432158 systemd-logind[1403]: Removed session 6. Feb 8 23:39:37.530111 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.12.6:51314.service. 
Feb 8 23:39:38.148933 sshd[1733]: Accepted publickey for core from 10.200.12.6 port 51314 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:38.150544 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:38.155399 systemd[1]: Started session-7.scope. Feb 8 23:39:38.155679 systemd-logind[1403]: New session 7 of user core. Feb 8 23:39:38.723721 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 8 23:39:38.724077 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:39:38.732940 dbus-daemon[1390]: Ѝ\x93\xe5\x9fU: received setenforce notice (enforcing=343203184) Feb 8 23:39:38.734679 sudo[1740]: pam_unix(sudo:session): session closed for user root Feb 8 23:39:38.850866 sshd[1733]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:38.854513 systemd[1]: sshd@4-10.200.8.17:22-10.200.12.6:51314.service: Deactivated successfully. Feb 8 23:39:38.856180 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:39:38.856984 systemd-logind[1403]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:39:38.858145 systemd-logind[1403]: Removed session 7. Feb 8 23:39:38.953429 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.12.6:51328.service. Feb 8 23:39:39.575354 sshd[1744]: Accepted publickey for core from 10.200.12.6 port 51328 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:39.576965 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:39.581767 systemd[1]: Started session-8.scope. Feb 8 23:39:39.582017 systemd-logind[1403]: New session 8 of user core. 
Feb 8 23:39:39.915708 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 8 23:39:39.916220 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:39:39.918925 sudo[1749]: pam_unix(sudo:session): session closed for user root Feb 8 23:39:39.923091 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 8 23:39:39.923344 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:39:39.931994 systemd[1]: Stopping audit-rules.service... Feb 8 23:39:39.932000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:39:39.936877 auditctl[1752]: No rules Feb 8 23:39:39.937310 systemd[1]: audit-rules.service: Deactivated successfully. Feb 8 23:39:39.937519 systemd[1]: Stopped audit-rules.service. Feb 8 23:39:39.939470 systemd[1]: Starting audit-rules.service... 
Feb 8 23:39:39.941761 kernel: audit: type=1305 audit(1707435579.932:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:39:39.932000 audit[1752]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9ff0e9b0 a2=420 a3=0 items=0 ppid=1 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:39.932000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:39:39.964908 kernel: audit: type=1300 audit(1707435579.932:137): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9ff0e9b0 a2=420 a3=0 items=0 ppid=1 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:39.964966 kernel: audit: type=1327 audit(1707435579.932:137): proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:39:39.964993 kernel: audit: type=1131 audit(1707435579.936:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:39.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:39.965867 augenrules[1770]: No rules Feb 8 23:39:39.966651 systemd[1]: Finished audit-rules.service. Feb 8 23:39:39.972384 sudo[1748]: pam_unix(sudo:session): session closed for user root Feb 8 23:39:39.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:39.975752 kernel: audit: type=1130 audit(1707435579.964:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:39.970000 audit[1748]: USER_END pid=1748 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:39.985758 kernel: audit: type=1106 audit(1707435579.970:140): pid=1748 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:39.970000 audit[1748]: CRED_DISP pid=1748 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:40.006341 kernel: audit: type=1104 audit(1707435579.970:141): pid=1748 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:40.073137 sshd[1744]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:40.073000 audit[1744]: USER_END pid=1744 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.076654 systemd[1]: sshd@5-10.200.8.17:22-10.200.12.6:51328.service: Deactivated successfully. Feb 8 23:39:40.077587 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 8 23:39:40.084309 systemd-logind[1403]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:39:40.085177 systemd-logind[1403]: Removed session 8. Feb 8 23:39:40.074000 audit[1744]: CRED_DISP pid=1744 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.100878 kernel: audit: type=1106 audit(1707435580.073:142): pid=1744 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.100937 kernel: audit: type=1104 audit(1707435580.074:143): pid=1744 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.100963 kernel: audit: type=1131 audit(1707435580.074:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.17:22-10.200.12.6:51328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:40.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.17:22-10.200.12.6:51328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:40.176199 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.12.6:51332.service. Feb 8 23:39:40.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.12.6:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:40.790000 audit[1777]: USER_ACCT pid=1777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.791071 sshd[1777]: Accepted publickey for core from 10.200.12.6 port 51332 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:40.791000 audit[1777]: CRED_ACQ pid=1777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.791000 audit[1777]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1d86b530 a2=3 a3=0 items=0 ppid=1 pid=1777 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:40.791000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:39:40.792695 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:40.798009 systemd[1]: Started session-9.scope. Feb 8 23:39:40.798249 systemd-logind[1403]: New session 9 of user core. 
Feb 8 23:39:40.802000 audit[1777]: USER_START pid=1777 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:40.804000 audit[1780]: CRED_ACQ pid=1780 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:41.129000 audit[1781]: USER_ACCT pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:41.130000 audit[1781]: CRED_REFR pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:41.130815 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:39:41.131140 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:39:41.132000 audit[1781]: USER_START pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:41.815312 systemd[1]: Reloading. 
Feb 8 23:39:41.920309 /usr/lib/systemd/system-generators/torcx-generator[1810]: time="2024-02-08T23:39:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:39:41.920345 /usr/lib/systemd/system-generators/torcx-generator[1810]: time="2024-02-08T23:39:41Z" level=info msg="torcx already run" Feb 8 23:39:41.992976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:39:41.992995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:39:42.009163 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:39:42.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:42.091817 systemd[1]: Started kubelet.service. Feb 8 23:39:42.121104 systemd[1]: Starting coreos-metadata.service... Feb 8 23:39:42.171614 kubelet[1878]: E0208 23:39:42.171549 1878 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:39:42.174207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:39:42.174418 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 8 23:39:42.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:39:42.185009 coreos-metadata[1885]: Feb 08 23:39:42.184 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:39:42.187614 coreos-metadata[1885]: Feb 08 23:39:42.187 INFO Fetch successful Feb 8 23:39:42.187795 coreos-metadata[1885]: Feb 08 23:39:42.187 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 8 23:39:42.189260 coreos-metadata[1885]: Feb 08 23:39:42.189 INFO Fetch successful Feb 8 23:39:42.189606 coreos-metadata[1885]: Feb 08 23:39:42.189 INFO Fetching http://168.63.129.16/machine/0b1ab88f-8de9-4c12-9c00-53fc007c957d/f3961ec6%2D2d1c%2D4f0d%2Db9c2%2D81312b0a4c87.%5Fci%2D3510.3.2%2Da%2Ddf5d74ad8f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 8 23:39:42.191181 coreos-metadata[1885]: Feb 08 23:39:42.191 INFO Fetch successful Feb 8 23:39:42.223233 coreos-metadata[1885]: Feb 08 23:39:42.223 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:39:42.232469 coreos-metadata[1885]: Feb 08 23:39:42.232 INFO Fetch successful Feb 8 23:39:42.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:42.245094 systemd[1]: Finished coreos-metadata.service. Feb 8 23:39:45.431867 systemd[1]: Stopped kubelet.service. Feb 8 23:39:45.435361 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 8 23:39:45.435449 kernel: audit: type=1130 audit(1707435585.431:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:45.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:45.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:45.454795 kernel: audit: type=1131 audit(1707435585.435:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:45.458780 systemd[1]: Reloading. Feb 8 23:39:45.531776 /usr/lib/systemd/system-generators/torcx-generator[1947]: time="2024-02-08T23:39:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:39:45.531812 /usr/lib/systemd/system-generators/torcx-generator[1947]: time="2024-02-08T23:39:45Z" level=info msg="torcx already run" Feb 8 23:39:45.628671 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:39:45.628691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:39:45.644685 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:39:45.733636 systemd[1]: Started kubelet.service. 
Feb 8 23:39:45.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:45.748765 kernel: audit: type=1130 audit(1707435585.733:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:45.783899 kubelet[2016]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:39:45.783899 kubelet[2016]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:39:45.784368 kubelet[2016]: I0208 23:39:45.783945 2016 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:39:45.785260 kubelet[2016]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:39:45.785260 kubelet[2016]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:39:46.104940 kubelet[2016]: I0208 23:39:46.104370 2016 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:39:46.104940 kubelet[2016]: I0208 23:39:46.104396 2016 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:39:46.104940 kubelet[2016]: I0208 23:39:46.104654 2016 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:39:46.106870 kubelet[2016]: I0208 23:39:46.106851 2016 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:39:46.110187 kubelet[2016]: I0208 23:39:46.110166 2016 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:39:46.110531 kubelet[2016]: I0208 23:39:46.110509 2016 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:39:46.110604 kubelet[2016]: I0208 23:39:46.110590 2016 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] 
ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:39:46.110817 kubelet[2016]: I0208 23:39:46.110617 2016 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:39:46.110817 kubelet[2016]: I0208 23:39:46.110633 2016 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:39:46.110817 kubelet[2016]: I0208 23:39:46.110753 2016 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:39:46.114766 kubelet[2016]: I0208 23:39:46.114736 2016 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:39:46.114902 kubelet[2016]: I0208 23:39:46.114890 2016 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:39:46.114995 kubelet[2016]: I0208 23:39:46.114985 2016 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:39:46.115064 kubelet[2016]: I0208 23:39:46.115055 2016 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:39:46.115272 kubelet[2016]: E0208 23:39:46.115189 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:46.115272 kubelet[2016]: E0208 23:39:46.115225 2016 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:46.116001 kubelet[2016]: I0208 23:39:46.115980 2016 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:39:46.116382 kubelet[2016]: W0208 23:39:46.116364 2016 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 8 23:39:46.116908 kubelet[2016]: I0208 23:39:46.116890 2016 server.go:1186] "Started kubelet" Feb 8 23:39:46.117163 kubelet[2016]: I0208 23:39:46.117148 2016 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:39:46.118879 kubelet[2016]: I0208 23:39:46.118862 2016 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:39:46.121746 kubelet[2016]: E0208 23:39:46.121717 2016 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:39:46.121831 kubelet[2016]: E0208 23:39:46.121758 2016 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:39:46.121000 audit[2016]: AVC avc: denied { mac_admin } for pid=2016 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:39:46.136400 kubelet[2016]: I0208 23:39:46.129813 2016 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:39:46.136400 kubelet[2016]: I0208 23:39:46.129856 2016 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:39:46.136400 kubelet[2016]: I0208 23:39:46.129939 2016 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:39:46.136400 kubelet[2016]: I0208 23:39:46.133008 2016 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:39:46.136400 kubelet[2016]: I0208 23:39:46.133088 2016 desired_state_of_world_populator.go:151] "Desired state populator starts to run" 
Feb 8 23:39:46.136756 kernel: audit: type=1400 audit(1707435586.121:160): avc: denied { mac_admin } for pid=2016 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:39:46.121000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:39:46.165032 kernel: audit: type=1401 audit(1707435586.121:160): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:39:46.165114 kernel: audit: type=1300 audit(1707435586.121:160): arch=c000003e syscall=188 success=no exit=-22 a0=c000f28900 a1=c000dca8b8 a2=c000f288d0 a3=25 items=0 ppid=1 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.121000 audit[2016]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f28900 a1=c000dca8b8 a2=c000f288d0 a3=25 items=0 ppid=1 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.121000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:39:46.181507 kernel: audit: type=1327 audit(1707435586.121:160): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:39:46.189765 kernel: audit: type=1400 audit(1707435586.126:161): avc: denied { mac_admin } for pid=2016 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:39:46.126000 audit[2016]: AVC avc: denied { mac_admin } for pid=2016 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:39:46.189908 kubelet[2016]: W0208 23:39:46.182324 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:46.189908 kubelet[2016]: E0208 23:39:46.182352 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:46.189908 kubelet[2016]: W0208 23:39:46.182378 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:46.189908 kubelet[2016]: E0208 23:39:46.182388 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:46.190050 kubelet[2016]: E0208 23:39:46.182406 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e835b487b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 116864123, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 116864123, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:46.190050 kubelet[2016]: E0208 23:39:46.182637 2016 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:46.190050 kubelet[2016]: W0208 23:39:46.182677 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:46.190193 kubelet[2016]: E0208 23:39:46.183461 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:46.190193 kubelet[2016]: E0208 23:39:46.183719 2016 event.go:267] Server rejected 
event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e83a59b2c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 121734956, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 121734956, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.193281 kubelet[2016]: I0208 23:39:46.193263 2016 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:39:46.193405 kubelet[2016]: I0208 23:39:46.193396 2016 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:39:46.193504 kubelet[2016]: I0208 23:39:46.193495 2016 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:39:46.194040 kubelet[2016]: E0208 23:39:46.193881 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.126000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:39:46.194944 kubelet[2016]: E0208 23:39:46.194887 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.195636 kubelet[2016]: E0208 23:39:46.195590 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.201305 kernel: audit: type=1401 audit(1707435586.126:161): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:39:46.221544 kernel: audit: type=1300 audit(1707435586.126:161): arch=c000003e syscall=188 success=no exit=-22 a0=c000b7a000 a1=c000dca000 a2=c000f28990 a3=25 items=0 ppid=1 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.126000 audit[2016]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b7a000 a1=c000dca000 a2=c000f28990 a3=25 items=0 ppid=1 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.224765 kubelet[2016]: I0208 23:39:46.222925 2016 policy_none.go:49] "None policy: Start" Feb 8 23:39:46.224765 kubelet[2016]: I0208 23:39:46.224016 2016 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:39:46.224765 kubelet[2016]: I0208 23:39:46.224064 2016 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:39:46.126000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:39:46.150000 audit[2027]: NETFILTER_CFG table=mangle:6 family=2 entries=2 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.150000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff02e60e80 a2=0 a3=7fff02e60e6c items=0 ppid=2016 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:39:46.160000 audit[2028]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.160000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd4d1b4c00 a2=0 a3=7ffd4d1b4bec items=0 ppid=2016 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:39:46.164000 audit[2030]: NETFILTER_CFG table=filter:8 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.164000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef10eeab0 a2=0 a3=7ffef10eea9c items=0 ppid=2016 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:39:46.164000 audit[2032]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.164000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc4a1e6d20 a2=0 a3=7ffc4a1e6d0c items=0 ppid=2016 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:39:46.225000 audit[2037]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.225000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd49e90c10 a2=0 a3=7ffd49e90bfc items=0 ppid=2016 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 8 23:39:46.227000 audit[2039]: NETFILTER_CFG table=nat:11 family=2 entries=2 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.227000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffe91637e0 a2=0 a3=7fffe91637cc items=0 ppid=2016 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:39:46.233729 kubelet[2016]: I0208 23:39:46.233716 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:46.235021 kubelet[2016]: I0208 23:39:46.235000 2016 manager.go:455] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:39:46.234000 audit[2016]: AVC avc: denied { mac_admin } for pid=2016 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:39:46.234000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:39:46.234000 audit[2016]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00118b890 a1=c0011a81e0 a2=c00118b860 a3=25 items=0 ppid=1 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.234000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:39:46.235327 kubelet[2016]: I0208 23:39:46.235108 2016 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:39:46.235327 kubelet[2016]: I0208 23:39:46.235284 2016 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:39:46.235782 kubelet[2016]: E0208 23:39:46.235003 2016 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:46.236729 kubelet[2016]: E0208 23:39:46.236651 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 233686624, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in 
the namespace "default"' (will not retry!) Feb 8 23:39:46.237161 kubelet[2016]: E0208 23:39:46.237148 2016 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.17\" not found" Feb 8 23:39:46.237727 kubelet[2016]: E0208 23:39:46.237657 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 233690724, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.238968 kubelet[2016]: E0208 23:39:46.238902 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 233693524, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.239863 kubelet[2016]: E0208 23:39:46.239810 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e8a6fd725", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 235651877, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 235651877, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.264000 audit[2043]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.264000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdec24ede0 a2=0 a3=7ffdec24edcc items=0 ppid=2016 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:39:46.292000 audit[2046]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.292000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffefda13450 a2=0 a3=7ffefda1343c items=0 ppid=2016 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:39:46.293000 audit[2047]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.293000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe64279a00 a2=0 a3=7ffe642799ec items=0 ppid=2016 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:39:46.295000 audit[2048]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.295000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0ef56f60 a2=0 a3=7ffd0ef56f4c items=0 ppid=2016 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:39:46.298000 audit[2050]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.298000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeedf84a40 a2=0 a3=7ffeedf84a2c items=0 ppid=2016 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:39:46.300000 audit[2052]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.300000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffecf5fa340 a2=0 a3=7ffecf5fa32c items=0 ppid=2016 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:39:46.349000 audit[2055]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.349000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffeafe5e3a0 a2=0 a3=7ffeafe5e38c items=0 ppid=2016 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:39:46.351000 audit[2057]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.351000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc33009ce0 a2=0 a3=7ffc33009ccc items=0 ppid=2016 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:39:46.384245 kubelet[2016]: E0208 23:39:46.384119 2016 controller.go:146] failed to ensure 
lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:46.394000 audit[2060]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.394000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe273cf040 a2=0 a3=7ffe273cf02c items=0 ppid=2016 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:39:46.396157 kubelet[2016]: I0208 23:39:46.396132 2016 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:39:46.396000 audit[2061]: NETFILTER_CFG table=mangle:21 family=10 entries=2 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.396000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe298f8990 a2=0 a3=7ffe298f897c items=0 ppid=2016 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.396000 audit[2062]: NETFILTER_CFG table=mangle:22 family=2 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.396000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdefc82cd0 a2=0 a3=7ffdefc82cbc items=0 ppid=2016 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:39:46.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:39:46.398000 audit[2064]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.398000 audit[2063]: NETFILTER_CFG table=nat:24 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.398000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd1888cd10 a2=0 a3=7ffd1888ccfc items=0 ppid=2016 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.398000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3c2de400 a2=0 a3=7ffc3c2de3ec items=0 ppid=2016 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:39:46.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:39:46.399000 audit[2065]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:39:46.399000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd93beeb20 a2=0 a3=7ffd93beeb0c items=0 ppid=2016 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:39:46.401000 audit[2067]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.401000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe8441f3a0 a2=0 a3=7ffe8441f38c items=0 ppid=2016 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.401000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:39:46.402000 audit[2068]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.402000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff0f00b6e0 a2=0 a3=7fff0f00b6cc items=0 ppid=2016 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:39:46.404000 audit[2070]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.404000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff58efbe40 a2=0 a3=7fff58efbe2c items=0 ppid=2016 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:39:46.405000 audit[2071]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.405000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe742224c0 a2=0 a3=7ffe742224ac items=0 ppid=2016 pid=2071 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:39:46.406000 audit[2072]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.406000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddccf3240 a2=0 a3=7ffddccf322c items=0 ppid=2016 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:39:46.408000 audit[2074]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.408000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe1995a6f0 a2=0 a3=7ffe1995a6dc items=0 ppid=2016 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:39:46.410000 audit[2076]: NETFILTER_CFG table=nat:32 family=10 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.410000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 
a1=7fffee960120 a2=0 a3=7fffee96010c items=0 ppid=2016 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:39:46.412000 audit[2078]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.412000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff86450c90 a2=0 a3=7fff86450c7c items=0 ppid=2016 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:39:46.414000 audit[2080]: NETFILTER_CFG table=nat:34 family=10 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.414000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffff1108fb0 a2=0 a3=7ffff1108f9c items=0 ppid=2016 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.414000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:39:46.434000 audit[2082]: NETFILTER_CFG table=nat:35 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.434000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffda9601170 a2=0 a3=7ffda960115c items=0 ppid=2016 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:39:46.435655 kubelet[2016]: I0208 23:39:46.435626 2016 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:39:46.435769 kubelet[2016]: I0208 23:39:46.435659 2016 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:39:46.435769 kubelet[2016]: I0208 23:39:46.435681 2016 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:39:46.435769 kubelet[2016]: E0208 23:39:46.435726 2016 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:39:46.437053 kubelet[2016]: I0208 23:39:46.437027 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:46.436000 audit[2083]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.436000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0d3e9d90 a2=0 a3=7ffc0d3e9d7c items=0 ppid=2016 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:39:46.437879 kubelet[2016]: W0208 23:39:46.437857 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:46.437952 kubelet[2016]: E0208 23:39:46.437898 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:46.438008 kubelet[2016]: E0208 23:39:46.437957 2016 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:46.438337 kubelet[2016]: E0208 23:39:46.438234 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 436981894, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.439072 kubelet[2016]: E0208 23:39:46.438996 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 436990194, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.438000 audit[2084]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.438000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9600c990 a2=0 a3=7fff9600c97c items=0 ppid=2016 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:39:46.440000 audit[2085]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:39:46.440000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff58398c90 a2=0 a3=7fff58398c7c items=0 ppid=2016 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:46.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:39:46.518408 kubelet[2016]: E0208 23:39:46.518312 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 436993994, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:46.786129 kubelet[2016]: E0208 23:39:46.786086 2016 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:46.839366 kubelet[2016]: I0208 23:39:46.839337 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:46.840690 kubelet[2016]: E0208 23:39:46.840660 2016 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:46.840872 kubelet[2016]: E0208 23:39:46.840602 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 839286017, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:46.918489 kubelet[2016]: E0208 23:39:46.918391 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 839302117, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:47.116082 kubelet[2016]: E0208 23:39:47.115948 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:47.119152 kubelet[2016]: E0208 23:39:47.119062 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 839307217, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:47.150791 kubelet[2016]: W0208 23:39:47.150759 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:47.150791 kubelet[2016]: E0208 23:39:47.150795 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:47.361544 kubelet[2016]: W0208 23:39:47.361504 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:47.361544 kubelet[2016]: E0208 23:39:47.361545 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:47.380880 kubelet[2016]: W0208 23:39:47.380790 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:47.381057 kubelet[2016]: E0208 23:39:47.381041 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:47.588166 kubelet[2016]: E0208 23:39:47.588112 2016 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io 
"10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:47.641773 kubelet[2016]: I0208 23:39:47.641630 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:47.643168 kubelet[2016]: E0208 23:39:47.643136 2016 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:47.643350 kubelet[2016]: E0208 23:39:47.643126 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 47, 641565116, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in 
API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:47.644317 kubelet[2016]: E0208 23:39:47.644242 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 47, 641576416, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:47.718999 kubelet[2016]: E0208 23:39:47.718893 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 47, 641581716, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:47.928297 kubelet[2016]: W0208 23:39:47.928171 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:47.928297 kubelet[2016]: E0208 23:39:47.928211 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:48.116603 kubelet[2016]: E0208 23:39:48.116552 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:49.117057 kubelet[2016]: E0208 23:39:49.117006 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:49.189814 kubelet[2016]: E0208 23:39:49.189768 2016 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:49.242866 kubelet[2016]: W0208 23:39:49.242823 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:49.242866 kubelet[2016]: E0208 23:39:49.242867 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:49.244642 kubelet[2016]: I0208 23:39:49.244608 2016 
kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:49.245520 kubelet[2016]: E0208 23:39:49.245486 2016 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:49.245779 kubelet[2016]: E0208 23:39:49.245681 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 49, 244562343, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:49.246616 kubelet[2016]: E0208 23:39:49.246546 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 49, 244574643, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:49.247402 kubelet[2016]: E0208 23:39:49.247346 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 49, 244579443, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:49.999137 kubelet[2016]: W0208 23:39:49.999094 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:49.999137 kubelet[2016]: E0208 23:39:49.999136 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:50.071021 kubelet[2016]: W0208 23:39:50.070981 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:50.071021 kubelet[2016]: E0208 23:39:50.071019 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:50.117604 kubelet[2016]: E0208 23:39:50.117549 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:50.290585 kubelet[2016]: W0208 23:39:50.290456 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:50.290585 kubelet[2016]: E0208 23:39:50.290500 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:51.117919 kubelet[2016]: E0208 23:39:51.117840 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:52.118862 kubelet[2016]: E0208 23:39:52.118808 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:52.391442 kubelet[2016]: E0208 23:39:52.391317 2016 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.17" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:39:52.446593 kubelet[2016]: I0208 23:39:52.446554 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:52.448103 kubelet[2016]: E0208 23:39:52.447730 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da0b45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.17 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192280389, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 52, 
446507465, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da0b45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:52.448103 kubelet[2016]: E0208 23:39:52.447825 2016 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.17" Feb 8 23:39:52.448819 kubelet[2016]: E0208 23:39:52.448732 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da5e16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.17 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192301590, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 52, 446521065, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da5e16" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:52.449680 kubelet[2016]: E0208 23:39:52.449612 2016 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17.17b2079e87da6c8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.17", UID:"10.200.8.17", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.17 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.17"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 46, 192305290, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 52, 446525766, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.17.17b2079e87da6c8a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:53.119445 kubelet[2016]: E0208 23:39:53.119383 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:53.565969 kubelet[2016]: W0208 23:39:53.565926 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:53.565969 kubelet[2016]: E0208 23:39:53.565968 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:39:53.611759 kubelet[2016]: W0208 23:39:53.611701 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:53.611759 kubelet[2016]: E0208 23:39:53.611760 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:39:54.119972 kubelet[2016]: E0208 23:39:54.119910 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:55.057001 kubelet[2016]: W0208 23:39:55.056961 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:55.057001 kubelet[2016]: E0208 23:39:55.057001 2016 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:39:55.120725 kubelet[2016]: E0208 23:39:55.120668 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:55.652086 kubelet[2016]: W0208 23:39:55.652042 2016 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:55.652086 kubelet[2016]: E0208 23:39:55.652092 2016 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:39:56.106872 kubelet[2016]: I0208 23:39:56.106814 2016 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 8 23:39:56.121119 kubelet[2016]: E0208 23:39:56.121090 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:56.237701 kubelet[2016]: E0208 23:39:56.237664 2016 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.17\" not found" Feb 8 23:39:56.537188 kubelet[2016]: E0208 23:39:56.537143 2016 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.17" not found Feb 8 23:39:57.121931 kubelet[2016]: E0208 23:39:57.121867 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 8 23:39:57.585482 kubelet[2016]: E0208 23:39:57.585443 2016 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.17" not found Feb 8 23:39:58.122036 kubelet[2016]: E0208 23:39:58.121981 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:58.795607 kubelet[2016]: E0208 23:39:58.795565 2016 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.17\" not found" node="10.200.8.17" Feb 8 23:39:58.849087 kubelet[2016]: I0208 23:39:58.849046 2016 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.17" Feb 8 23:39:58.986436 kubelet[2016]: I0208 23:39:58.986394 2016 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.17" Feb 8 23:39:58.997220 kubelet[2016]: E0208 23:39:58.997186 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.098111 kubelet[2016]: E0208 23:39:59.097975 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.122521 kubelet[2016]: E0208 23:39:59.122477 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:39:59.198895 kubelet[2016]: E0208 23:39:59.198850 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.249888 sudo[1781]: pam_unix(sudo:session): session closed for user root Feb 8 23:39:59.249000 audit[1781]: USER_END pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:59.254021 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 8 23:39:59.254128 kernel: audit: type=1106 audit(1707435599.249:196): pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.249000 audit[1781]: CRED_DISP pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.282560 kernel: audit: type=1104 audit(1707435599.249:197): pid=1781 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.300021 kubelet[2016]: E0208 23:39:59.299971 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.347965 sshd[1777]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:59.349000 audit[1777]: USER_END pid=1777 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:59.352137 systemd-logind[1403]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:39:59.353708 systemd[1]: sshd@6-10.200.8.17:22-10.200.12.6:51332.service: Deactivated successfully. Feb 8 23:39:59.354660 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:39:59.356428 systemd-logind[1403]: Removed session 9. 
Feb 8 23:39:59.349000 audit[1777]: CRED_DISP pid=1777 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:59.382428 kernel: audit: type=1106 audit(1707435599.349:198): pid=1777 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:59.382517 kernel: audit: type=1104 audit(1707435599.349:199): pid=1777 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:39:59.382549 kernel: audit: type=1131 audit(1707435599.353:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.12.6:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.12.6:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:59.400758 kubelet[2016]: E0208 23:39:59.400726 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.501914 kubelet[2016]: E0208 23:39:59.501854 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.603065 kubelet[2016]: E0208 23:39:59.602921 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.703813 kubelet[2016]: E0208 23:39:59.703736 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.804541 kubelet[2016]: E0208 23:39:59.804434 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:39:59.905279 kubelet[2016]: E0208 23:39:59.905146 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.006056 kubelet[2016]: E0208 23:40:00.006004 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.106815 kubelet[2016]: E0208 23:40:00.106760 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.123249 kubelet[2016]: E0208 23:40:00.123212 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:00.207816 kubelet[2016]: E0208 23:40:00.207780 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.308780 kubelet[2016]: E0208 23:40:00.308723 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.409516 kubelet[2016]: E0208 23:40:00.409466 2016 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.510610 kubelet[2016]: E0208 23:40:00.510490 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.611522 kubelet[2016]: E0208 23:40:00.611472 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.712224 kubelet[2016]: E0208 23:40:00.712171 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.813355 kubelet[2016]: E0208 23:40:00.813226 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:00.914082 kubelet[2016]: E0208 23:40:00.914032 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.014981 kubelet[2016]: E0208 23:40:01.014932 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.116067 kubelet[2016]: E0208 23:40:01.115865 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.123327 kubelet[2016]: E0208 23:40:01.123296 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:01.217052 kubelet[2016]: E0208 23:40:01.217005 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.317241 kubelet[2016]: E0208 23:40:01.317193 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.418371 kubelet[2016]: E0208 23:40:01.418235 2016 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"10.200.8.17\" not found" Feb 8 23:40:01.518601 kubelet[2016]: E0208 23:40:01.518548 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.619423 kubelet[2016]: E0208 23:40:01.619368 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.720165 kubelet[2016]: E0208 23:40:01.720100 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.820867 kubelet[2016]: E0208 23:40:01.820816 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:01.921553 kubelet[2016]: E0208 23:40:01.921503 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.022338 kubelet[2016]: E0208 23:40:02.022211 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.123137 kubelet[2016]: E0208 23:40:02.123087 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.124219 kubelet[2016]: E0208 23:40:02.124197 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:02.223807 kubelet[2016]: E0208 23:40:02.223760 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.324791 kubelet[2016]: E0208 23:40:02.324648 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.425388 kubelet[2016]: E0208 23:40:02.425333 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.526291 kubelet[2016]: E0208 
23:40:02.526239 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.627232 kubelet[2016]: E0208 23:40:02.627102 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.727761 kubelet[2016]: E0208 23:40:02.727709 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.827926 kubelet[2016]: E0208 23:40:02.827873 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:02.928485 kubelet[2016]: E0208 23:40:02.928355 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.028946 kubelet[2016]: E0208 23:40:03.028895 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.124588 kubelet[2016]: E0208 23:40:03.124512 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:03.129898 kubelet[2016]: E0208 23:40:03.129873 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.230227 kubelet[2016]: E0208 23:40:03.230183 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.330971 kubelet[2016]: E0208 23:40:03.330926 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.431515 kubelet[2016]: E0208 23:40:03.431466 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.532555 kubelet[2016]: E0208 23:40:03.532437 2016 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.633096 kubelet[2016]: E0208 23:40:03.633052 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.733642 kubelet[2016]: E0208 23:40:03.733589 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.834195 kubelet[2016]: E0208 23:40:03.834066 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:03.934601 kubelet[2016]: E0208 23:40:03.934552 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.035148 kubelet[2016]: E0208 23:40:04.035100 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.124891 kubelet[2016]: E0208 23:40:04.124736 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:04.136265 kubelet[2016]: E0208 23:40:04.136231 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.236955 kubelet[2016]: E0208 23:40:04.236907 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.337818 kubelet[2016]: E0208 23:40:04.337763 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.438586 kubelet[2016]: E0208 23:40:04.438454 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.539345 kubelet[2016]: E0208 23:40:04.539295 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.640341 
kubelet[2016]: E0208 23:40:04.640292 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.741021 kubelet[2016]: E0208 23:40:04.740975 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.841696 kubelet[2016]: E0208 23:40:04.841653 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:04.942360 kubelet[2016]: E0208 23:40:04.942314 2016 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.17\" not found" Feb 8 23:40:05.043412 kubelet[2016]: I0208 23:40:05.043294 2016 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 8 23:40:05.044047 env[1432]: time="2024-02-08T23:40:05.043991167Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 8 23:40:05.044561 kubelet[2016]: I0208 23:40:05.044279 2016 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 8 23:40:05.125588 kubelet[2016]: E0208 23:40:05.125534 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:05.125588 kubelet[2016]: I0208 23:40:05.125543 2016 apiserver.go:52] "Watching apiserver" Feb 8 23:40:05.128231 kubelet[2016]: I0208 23:40:05.128201 2016 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:40:05.128358 kubelet[2016]: I0208 23:40:05.128305 2016 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:40:05.128430 kubelet[2016]: I0208 23:40:05.128361 2016 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:40:05.129864 kubelet[2016]: E0208 23:40:05.129248 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:05.134202 kubelet[2016]: I0208 23:40:05.134177 2016 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:40:05.146284 kubelet[2016]: I0208 23:40:05.146266 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-tigera-ca-bundle\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146392 kubelet[2016]: I0208 23:40:05.146303 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-node-certs\") pod \"calico-node-glgq5\" (UID: 
\"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146392 kubelet[2016]: I0208 23:40:05.146333 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-cni-bin-dir\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146392 kubelet[2016]: I0208 23:40:05.146361 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-cni-net-dir\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146392 kubelet[2016]: I0208 23:40:05.146390 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71447fa4-d4fe-4672-b40e-d0d97e4a6825-varrun\") pod \"csi-node-driver-tvxct\" (UID: \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\") " pod="calico-system/csi-node-driver-tvxct" Feb 8 23:40:05.146569 kubelet[2016]: I0208 23:40:05.146417 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee179a71-8019-4e65-8bb0-db1b19aff379-lib-modules\") pod \"kube-proxy-f92sw\" (UID: \"ee179a71-8019-4e65-8bb0-db1b19aff379\") " pod="kube-system/kube-proxy-f92sw" Feb 8 23:40:05.146569 kubelet[2016]: I0208 23:40:05.146446 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee179a71-8019-4e65-8bb0-db1b19aff379-xtables-lock\") pod \"kube-proxy-f92sw\" (UID: \"ee179a71-8019-4e65-8bb0-db1b19aff379\") " pod="kube-system/kube-proxy-f92sw" Feb 8 23:40:05.146569 
kubelet[2016]: I0208 23:40:05.146477 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkccn\" (UniqueName: \"kubernetes.io/projected/ee179a71-8019-4e65-8bb0-db1b19aff379-kube-api-access-hkccn\") pod \"kube-proxy-f92sw\" (UID: \"ee179a71-8019-4e65-8bb0-db1b19aff379\") " pod="kube-system/kube-proxy-f92sw" Feb 8 23:40:05.146569 kubelet[2016]: I0208 23:40:05.146506 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-lib-modules\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146569 kubelet[2016]: I0208 23:40:05.146540 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-xtables-lock\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146787 kubelet[2016]: I0208 23:40:05.146569 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-var-lib-calico\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146787 kubelet[2016]: I0208 23:40:05.146597 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71447fa4-d4fe-4672-b40e-d0d97e4a6825-kubelet-dir\") pod \"csi-node-driver-tvxct\" (UID: \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\") " pod="calico-system/csi-node-driver-tvxct" Feb 8 23:40:05.146787 kubelet[2016]: I0208 23:40:05.146627 2016 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71447fa4-d4fe-4672-b40e-d0d97e4a6825-socket-dir\") pod \"csi-node-driver-tvxct\" (UID: \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\") " pod="calico-system/csi-node-driver-tvxct" Feb 8 23:40:05.146787 kubelet[2016]: I0208 23:40:05.146656 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-policysync\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146787 kubelet[2016]: I0208 23:40:05.146685 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-var-run-calico\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146998 kubelet[2016]: I0208 23:40:05.146714 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-flexvol-driver-host\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146998 kubelet[2016]: I0208 23:40:05.146780 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k77s6\" (UniqueName: \"kubernetes.io/projected/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-kube-api-access-k77s6\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.146998 kubelet[2016]: I0208 23:40:05.146818 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71447fa4-d4fe-4672-b40e-d0d97e4a6825-registration-dir\") pod \"csi-node-driver-tvxct\" (UID: \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\") " pod="calico-system/csi-node-driver-tvxct" Feb 8 23:40:05.146998 kubelet[2016]: I0208 23:40:05.146850 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzntz\" (UniqueName: \"kubernetes.io/projected/71447fa4-d4fe-4672-b40e-d0d97e4a6825-kube-api-access-lzntz\") pod \"csi-node-driver-tvxct\" (UID: \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\") " pod="calico-system/csi-node-driver-tvxct" Feb 8 23:40:05.146998 kubelet[2016]: I0208 23:40:05.146886 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee179a71-8019-4e65-8bb0-db1b19aff379-kube-proxy\") pod \"kube-proxy-f92sw\" (UID: \"ee179a71-8019-4e65-8bb0-db1b19aff379\") " pod="kube-system/kube-proxy-f92sw" Feb 8 23:40:05.147200 kubelet[2016]: I0208 23:40:05.146932 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db6ea8db-5aa2-4e1a-aedd-75bebfecb262-cni-log-dir\") pod \"calico-node-glgq5\" (UID: \"db6ea8db-5aa2-4e1a-aedd-75bebfecb262\") " pod="calico-system/calico-node-glgq5" Feb 8 23:40:05.147200 kubelet[2016]: I0208 23:40:05.146944 2016 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:40:05.248960 kubelet[2016]: E0208 23:40:05.248926 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.248960 kubelet[2016]: W0208 23:40:05.248949 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 
23:40:05.249250 kubelet[2016]: E0208 23:40:05.249002 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.249319 kubelet[2016]: E0208 23:40:05.249262 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.249319 kubelet[2016]: W0208 23:40:05.249274 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.249319 kubelet[2016]: E0208 23:40:05.249300 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.249544 kubelet[2016]: E0208 23:40:05.249523 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.249544 kubelet[2016]: W0208 23:40:05.249539 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.249686 kubelet[2016]: E0208 23:40:05.249563 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.249884 kubelet[2016]: E0208 23:40:05.249861 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.249995 kubelet[2016]: W0208 23:40:05.249973 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.250139 kubelet[2016]: E0208 23:40:05.250124 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.250338 kubelet[2016]: E0208 23:40:05.250215 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.250430 kubelet[2016]: W0208 23:40:05.250337 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.250554 kubelet[2016]: E0208 23:40:05.250538 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.250692 kubelet[2016]: E0208 23:40:05.250580 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.250805 kubelet[2016]: W0208 23:40:05.250788 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.251112 kubelet[2016]: E0208 23:40:05.251097 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.251224 kubelet[2016]: W0208 23:40:05.251208 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.251525 kubelet[2016]: E0208 23:40:05.251510 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.251637 kubelet[2016]: E0208 23:40:05.251618 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.251708 kubelet[2016]: W0208 23:40:05.251623 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.251708 kubelet[2016]: E0208 23:40:05.251660 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.251901 kubelet[2016]: E0208 23:40:05.251605 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.251998 kubelet[2016]: E0208 23:40:05.251944 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.252073 kubelet[2016]: W0208 23:40:05.251996 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.252073 kubelet[2016]: E0208 23:40:05.252013 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.252197 kubelet[2016]: E0208 23:40:05.252180 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.252197 kubelet[2016]: W0208 23:40:05.252195 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.252309 kubelet[2016]: E0208 23:40:05.252212 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.252452 kubelet[2016]: E0208 23:40:05.252433 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.252532 kubelet[2016]: W0208 23:40:05.252507 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.252532 kubelet[2016]: E0208 23:40:05.252531 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.252736 kubelet[2016]: E0208 23:40:05.252720 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.252835 kubelet[2016]: W0208 23:40:05.252732 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.252835 kubelet[2016]: E0208 23:40:05.252780 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.253002 kubelet[2016]: E0208 23:40:05.252987 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.253002 kubelet[2016]: W0208 23:40:05.252998 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.253132 kubelet[2016]: E0208 23:40:05.253019 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.253217 kubelet[2016]: E0208 23:40:05.253201 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.253217 kubelet[2016]: W0208 23:40:05.253213 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.253342 kubelet[2016]: E0208 23:40:05.253289 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.253429 kubelet[2016]: E0208 23:40:05.253415 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.253429 kubelet[2016]: W0208 23:40:05.253426 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.253555 kubelet[2016]: E0208 23:40:05.253446 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.253643 kubelet[2016]: E0208 23:40:05.253628 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.253643 kubelet[2016]: W0208 23:40:05.253640 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.253784 kubelet[2016]: E0208 23:40:05.253658 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.253895 kubelet[2016]: E0208 23:40:05.253882 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.253895 kubelet[2016]: W0208 23:40:05.253893 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.254001 kubelet[2016]: E0208 23:40:05.253912 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.254468 kubelet[2016]: E0208 23:40:05.254442 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.254468 kubelet[2016]: W0208 23:40:05.254459 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.254565 kubelet[2016]: E0208 23:40:05.254476 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.262373 kubelet[2016]: E0208 23:40:05.262356 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.262373 kubelet[2016]: W0208 23:40:05.262369 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.262482 kubelet[2016]: E0208 23:40:05.262386 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.348507 kubelet[2016]: E0208 23:40:05.348383 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.348712 kubelet[2016]: W0208 23:40:05.348693 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.348861 kubelet[2016]: E0208 23:40:05.348847 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.349241 kubelet[2016]: E0208 23:40:05.349218 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.349360 kubelet[2016]: W0208 23:40:05.349346 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.349467 kubelet[2016]: E0208 23:40:05.349456 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.349819 kubelet[2016]: E0208 23:40:05.349796 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.349939 kubelet[2016]: W0208 23:40:05.349925 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.350053 kubelet[2016]: E0208 23:40:05.350042 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.450765 kubelet[2016]: E0208 23:40:05.450711 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.450765 kubelet[2016]: W0208 23:40:05.450734 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.451046 kubelet[2016]: E0208 23:40:05.450779 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.451115 kubelet[2016]: E0208 23:40:05.451048 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.451115 kubelet[2016]: W0208 23:40:05.451064 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.451115 kubelet[2016]: E0208 23:40:05.451084 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.451307 kubelet[2016]: E0208 23:40:05.451289 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.451307 kubelet[2016]: W0208 23:40:05.451299 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.451408 kubelet[2016]: E0208 23:40:05.451317 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.552344 kubelet[2016]: E0208 23:40:05.552310 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.552344 kubelet[2016]: W0208 23:40:05.552334 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.552618 kubelet[2016]: E0208 23:40:05.552362 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.553167 kubelet[2016]: E0208 23:40:05.552801 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.553167 kubelet[2016]: W0208 23:40:05.552819 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.553167 kubelet[2016]: E0208 23:40:05.552850 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.553167 kubelet[2016]: E0208 23:40:05.553103 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.553167 kubelet[2016]: W0208 23:40:05.553127 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.553167 kubelet[2016]: E0208 23:40:05.553147 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.608103 kubelet[2016]: E0208 23:40:05.607353 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.608307 kubelet[2016]: W0208 23:40:05.608286 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.608412 kubelet[2016]: E0208 23:40:05.608399 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.654019 kubelet[2016]: E0208 23:40:05.653984 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.654019 kubelet[2016]: W0208 23:40:05.654010 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.654282 kubelet[2016]: E0208 23:40:05.654036 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.654706 kubelet[2016]: E0208 23:40:05.654569 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.654706 kubelet[2016]: W0208 23:40:05.654584 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.654706 kubelet[2016]: E0208 23:40:05.654604 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.732499 env[1432]: time="2024-02-08T23:40:05.732439835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-glgq5,Uid:db6ea8db-5aa2-4e1a-aedd-75bebfecb262,Namespace:calico-system,Attempt:0,}" Feb 8 23:40:05.754989 kubelet[2016]: E0208 23:40:05.754968 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.754989 kubelet[2016]: W0208 23:40:05.754984 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.755188 kubelet[2016]: E0208 23:40:05.755005 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.755242 kubelet[2016]: E0208 23:40:05.755211 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.755242 kubelet[2016]: W0208 23:40:05.755221 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.755242 kubelet[2016]: E0208 23:40:05.755236 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.810656 kubelet[2016]: E0208 23:40:05.810635 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.810656 kubelet[2016]: W0208 23:40:05.810649 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.810823 kubelet[2016]: E0208 23:40:05.810676 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:05.856156 kubelet[2016]: E0208 23:40:05.856134 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.856156 kubelet[2016]: W0208 23:40:05.856149 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.856301 kubelet[2016]: E0208 23:40:05.856168 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:05.956989 kubelet[2016]: E0208 23:40:05.956962 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:05.956989 kubelet[2016]: W0208 23:40:05.956984 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:05.957220 kubelet[2016]: E0208 23:40:05.957010 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:06.008086 kubelet[2016]: E0208 23:40:06.008059 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:06.008086 kubelet[2016]: W0208 23:40:06.008084 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:06.008277 kubelet[2016]: E0208 23:40:06.008107 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:06.033777 env[1432]: time="2024-02-08T23:40:06.033713853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f92sw,Uid:ee179a71-8019-4e65-8bb0-db1b19aff379,Namespace:kube-system,Attempt:0,}" Feb 8 23:40:06.115369 kubelet[2016]: E0208 23:40:06.115334 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:06.126585 kubelet[2016]: E0208 23:40:06.126553 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:06.437498 kubelet[2016]: E0208 23:40:06.436204 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:06.479295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899480836.mount: Deactivated successfully. 
Feb 8 23:40:06.545067 env[1432]: time="2024-02-08T23:40:06.545010209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.547637 env[1432]: time="2024-02-08T23:40:06.547592850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.558420 env[1432]: time="2024-02-08T23:40:06.558381222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.565660 env[1432]: time="2024-02-08T23:40:06.565617437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.568426 env[1432]: time="2024-02-08T23:40:06.568393582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.571983 env[1432]: time="2024-02-08T23:40:06.571951238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.575026 env[1432]: time="2024-02-08T23:40:06.574991887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.577680 env[1432]: time="2024-02-08T23:40:06.577647829Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:06.636612 env[1432]: time="2024-02-08T23:40:06.635591954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:06.636612 env[1432]: time="2024-02-08T23:40:06.635632554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:06.636612 env[1432]: time="2024-02-08T23:40:06.635666355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:06.636612 env[1432]: time="2024-02-08T23:40:06.635858358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26 pid=2145 runtime=io.containerd.runc.v2 Feb 8 23:40:06.637145 env[1432]: time="2024-02-08T23:40:06.634988044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:06.637145 env[1432]: time="2024-02-08T23:40:06.635060445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:06.637145 env[1432]: time="2024-02-08T23:40:06.635093646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:06.637145 env[1432]: time="2024-02-08T23:40:06.635299649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63d717c22e4e124d1d4f60ca8a1e9c38bcd9c0d5bede162d8fb8d2b24d9f4264 pid=2144 runtime=io.containerd.runc.v2 Feb 8 23:40:06.706802 env[1432]: time="2024-02-08T23:40:06.706754289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f92sw,Uid:ee179a71-8019-4e65-8bb0-db1b19aff379,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d717c22e4e124d1d4f60ca8a1e9c38bcd9c0d5bede162d8fb8d2b24d9f4264\"" Feb 8 23:40:06.711949 env[1432]: time="2024-02-08T23:40:06.711903171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-glgq5,Uid:db6ea8db-5aa2-4e1a-aedd-75bebfecb262,Namespace:calico-system,Attempt:0,} returns sandbox id \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\"" Feb 8 23:40:06.712719 env[1432]: time="2024-02-08T23:40:06.712694384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:40:07.127226 kubelet[2016]: E0208 23:40:07.127107 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:07.703588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130868324.mount: Deactivated successfully. 
Feb 8 23:40:08.127688 kubelet[2016]: E0208 23:40:08.127327 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:08.228836 env[1432]: time="2024-02-08T23:40:08.228776585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:08.233875 env[1432]: time="2024-02-08T23:40:08.233837262Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:08.239337 env[1432]: time="2024-02-08T23:40:08.239301845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:08.242252 env[1432]: time="2024-02-08T23:40:08.242217189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:08.242688 env[1432]: time="2024-02-08T23:40:08.242655996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:40:08.244135 env[1432]: time="2024-02-08T23:40:08.244109818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 8 23:40:08.244974 env[1432]: time="2024-02-08T23:40:08.244933230Z" level=info msg="CreateContainer within sandbox \"63d717c22e4e124d1d4f60ca8a1e9c38bcd9c0d5bede162d8fb8d2b24d9f4264\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:40:08.274666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224476619.mount: Deactivated 
successfully. Feb 8 23:40:08.294680 env[1432]: time="2024-02-08T23:40:08.294640283Z" level=info msg="CreateContainer within sandbox \"63d717c22e4e124d1d4f60ca8a1e9c38bcd9c0d5bede162d8fb8d2b24d9f4264\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"540b1dee730bf9d78eff02d061647af8d6924822cce7d911de2562f8549058a4\"" Feb 8 23:40:08.295587 env[1432]: time="2024-02-08T23:40:08.295554597Z" level=info msg="StartContainer for \"540b1dee730bf9d78eff02d061647af8d6924822cce7d911de2562f8549058a4\"" Feb 8 23:40:08.354237 env[1432]: time="2024-02-08T23:40:08.354182786Z" level=info msg="StartContainer for \"540b1dee730bf9d78eff02d061647af8d6924822cce7d911de2562f8549058a4\" returns successfully" Feb 8 23:40:08.389000 audit[2267]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:40:08.400779 kernel: audit: type=1325 audit(1707435608.389:201): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:40:08.392000 audit[2268]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.411764 kernel: audit: type=1325 audit(1707435608.392:202): table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.411836 kernel: audit: type=1300 audit(1707435608.392:202): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff56c644c0 a2=0 a3=7fff56c644ac items=0 ppid=2229 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.392000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff56c644c0 a2=0 a3=7fff56c644ac items=0 ppid=2229 pid=2268 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:40:08.428760 kernel: audit: type=1327 audit(1707435608.392:202): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:40:08.437628 kubelet[2016]: E0208 23:40:08.437254 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:08.393000 audit[2269]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.448413 kernel: audit: type=1325 audit(1707435608.393:203): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.448472 kernel: audit: type=1300 audit(1707435608.393:203): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd25d2b0c0 a2=0 a3=7ffd25d2b0ac items=0 ppid=2229 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.393000 audit[2269]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd25d2b0c0 a2=0 a3=7ffd25d2b0ac items=0 ppid=2229 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:40:08.470626 kernel: audit: type=1327 audit(1707435608.393:203): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:40:08.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:40:08.468996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248443232.mount: Deactivated successfully. Feb 8 23:40:08.394000 audit[2270]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.394000 audit[2270]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb4831a00 a2=0 a3=7ffdb48319ec items=0 ppid=2229 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.509651 kernel: audit: type=1325 audit(1707435608.394:204): table=filter:42 family=2 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.509737 kernel: audit: type=1300 audit(1707435608.394:204): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb4831a00 a2=0 a3=7ffdb48319ec items=0 ppid=2229 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:40:08.389000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd708ddf0 a2=0 a3=7fffd708dddc items=0 ppid=2229 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:40:08.520764 kernel: audit: type=1327 audit(1707435608.394:204): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:40:08.396000 audit[2271]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:40:08.396000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe68e473d0 a2=0 a3=7ffe68e473bc items=0 ppid=2229 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:40:08.397000 audit[2272]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:40:08.397000 audit[2272]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc308e2ac0 a2=0 a3=7ffc308e2aac items=0 ppid=2229 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:40:08.492000 audit[2273]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.492000 audit[2273]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0b6678e0 a2=0 a3=7ffd0b6678cc items=0 ppid=2229 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:40:08.495000 audit[2275]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.495000 audit[2275]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe000b64a0 a2=0 a3=7ffe000b648c items=0 ppid=2229 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 8 23:40:08.499000 audit[2278]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.499000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe8ee658b0 a2=0 a3=7ffe8ee6589c items=0 ppid=2229 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.499000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 8 23:40:08.500000 audit[2279]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.500000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7294f720 a2=0 a3=7ffc7294f70c items=0 ppid=2229 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:40:08.502000 audit[2281]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.502000 audit[2281]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2ac0f610 a2=0 a3=7ffd2ac0f5fc items=0 ppid=2229 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:40:08.504000 audit[2282]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.504000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffefb76d5c0 a2=0 a3=7ffefb76d5ac items=0 ppid=2229 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:40:08.507000 audit[2284]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.507000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd2ab92b40 a2=0 a3=7ffd2ab92b2c items=0 ppid=2229 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:40:08.512000 audit[2287]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.512000 audit[2287]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff967e8f30 a2=0 a3=7fff967e8f1c items=0 ppid=2229 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.512000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 8 23:40:08.513000 audit[2288]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.513000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccfd49ed0 a2=0 a3=7ffccfd49ebc items=0 ppid=2229 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:40:08.516000 audit[2290]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.516000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6b9f35d0 a2=0 a3=7ffd6b9f35bc items=0 ppid=2229 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:40:08.517000 audit[2291]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.517000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9a97ef80 a2=0 
a3=7fff9a97ef6c items=0 ppid=2229 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:40:08.521000 audit[2293]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.521000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca0945cc0 a2=0 a3=7ffca0945cac items=0 ppid=2229 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:40:08.526000 audit[2296]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.526000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf2287aa0 a2=0 a3=7ffcf2287a8c items=0 ppid=2229 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.526000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:40:08.529000 audit[2299]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.529000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfaae6de0 a2=0 a3=7ffcfaae6dcc items=0 ppid=2229 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:40:08.530000 audit[2300]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.530000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd40dd4010 a2=0 a3=7ffd40dd3ffc items=0 ppid=2229 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:40:08.537000 audit[2302]: NETFILTER_CFG table=nat:60 family=2 entries=2 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.537000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 
a0=3 a1=7fff0789f170 a2=0 a3=7fff0789f15c items=0 ppid=2229 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:40:08.540000 audit[2305]: NETFILTER_CFG table=nat:61 family=2 entries=2 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:40:08.540000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd5bc60740 a2=0 a3=7ffd5bc6072c items=0 ppid=2229 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:40:08.545000 audit[2309]: NETFILTER_CFG table=filter:62 family=2 entries=3 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:08.545000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe79c318a0 a2=0 a3=7ffe79c3188c items=0 ppid=2229 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:08.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 
23:40:08.555281 kubelet[2016]: E0208 23:40:08.555241 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:08.555281 kubelet[2016]: W0208 23:40:08.555256 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:08.555281 kubelet[2016]: E0208 23:40:08.555277 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:08.555494 kubelet[2016]: E0208 23:40:08.555470 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:08.555494 kubelet[2016]: W0208 23:40:08.555481 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:08.555597 kubelet[2016]: E0208 23:40:08.555499 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:08.555675 kubelet[2016]: E0208 23:40:08.555661 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:08.555732 kubelet[2016]: W0208 23:40:08.555676 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:08.555732 kubelet[2016]: E0208 23:40:08.555690 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:08.555940 kubelet[2016]: E0208 23:40:08.555919 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:08.555940 kubelet[2016]: W0208 23:40:08.555936 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:08.556072 kubelet[2016]: E0208 23:40:08.555953 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:08.556137 kubelet[2016]: E0208 23:40:08.556118 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:08.556137 kubelet[2016]: W0208 23:40:08.556136 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:08.556249 kubelet[2016]: E0208 23:40:08.556151 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:40:08.556333 kubelet[2016]: E0208 23:40:08.556315 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.556333 kubelet[2016]: W0208 23:40:08.556329 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.556441 kubelet[2016]: E0208 23:40:08.556343 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.556559 kubelet[2016]: E0208 23:40:08.556545 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.556559 kubelet[2016]: W0208 23:40:08.556557 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.556684 kubelet[2016]: E0208 23:40:08.556572 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.556758 kubelet[2016]: E0208 23:40:08.556731 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.556758 kubelet[2016]: W0208 23:40:08.556754 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.556862 kubelet[2016]: E0208 23:40:08.556769 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.556963 kubelet[2016]: E0208 23:40:08.556945 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.556963 kubelet[2016]: W0208 23:40:08.556957 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.557060 kubelet[2016]: E0208 23:40:08.556972 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.557205 kubelet[2016]: E0208 23:40:08.557180 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.557205 kubelet[2016]: W0208 23:40:08.557193 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.557319 kubelet[2016]: E0208 23:40:08.557208 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.557385 kubelet[2016]: E0208 23:40:08.557368 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.557433 kubelet[2016]: W0208 23:40:08.557384 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.557433 kubelet[2016]: E0208 23:40:08.557398 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.557569 kubelet[2016]: E0208 23:40:08.557552 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.557569 kubelet[2016]: W0208 23:40:08.557565 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.557673 kubelet[2016]: E0208 23:40:08.557588 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.557796 kubelet[2016]: E0208 23:40:08.557780 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.557796 kubelet[2016]: W0208 23:40:08.557792 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.557926 kubelet[2016]: E0208 23:40:08.557806 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.557975 kubelet[2016]: E0208 23:40:08.557966 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.558022 kubelet[2016]: W0208 23:40:08.557974 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.558022 kubelet[2016]: E0208 23:40:08.557990 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.558161 kubelet[2016]: E0208 23:40:08.558144 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.558161 kubelet[2016]: W0208 23:40:08.558156 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.558283 kubelet[2016]: E0208 23:40:08.558170 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.558339 kubelet[2016]: E0208 23:40:08.558328 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.558381 kubelet[2016]: W0208 23:40:08.558338 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.558381 kubelet[2016]: E0208 23:40:08.558352 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.579674 kubelet[2016]: E0208 23:40:08.579657 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.579795 kubelet[2016]: W0208 23:40:08.579670 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.579795 kubelet[2016]: E0208 23:40:08.579696 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.580015 kubelet[2016]: E0208 23:40:08.579997 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.580015 kubelet[2016]: W0208 23:40:08.580012 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.580174 kubelet[2016]: E0208 23:40:08.580034 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.580425 kubelet[2016]: E0208 23:40:08.580301 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.580425 kubelet[2016]: W0208 23:40:08.580314 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.580425 kubelet[2016]: E0208 23:40:08.580342 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.580603 kubelet[2016]: E0208 23:40:08.580543 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.580603 kubelet[2016]: W0208 23:40:08.580553 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.580603 kubelet[2016]: E0208 23:40:08.580573 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.580788 kubelet[2016]: E0208 23:40:08.580773 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.580841 kubelet[2016]: W0208 23:40:08.580789 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.580841 kubelet[2016]: E0208 23:40:08.580807 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.581073 kubelet[2016]: E0208 23:40:08.581052 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.581073 kubelet[2016]: W0208 23:40:08.581067 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.581192 kubelet[2016]: E0208 23:40:08.581148 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.581492 kubelet[2016]: E0208 23:40:08.581470 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.581492 kubelet[2016]: W0208 23:40:08.581486 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.581633 kubelet[2016]: E0208 23:40:08.581505 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.581707 kubelet[2016]: E0208 23:40:08.581695 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.581772 kubelet[2016]: W0208 23:40:08.581709 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.581772 kubelet[2016]: E0208 23:40:08.581729 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.581945 kubelet[2016]: E0208 23:40:08.581932 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.582021 kubelet[2016]: W0208 23:40:08.581946 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.582021 kubelet[2016]: E0208 23:40:08.581965 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.582187 kubelet[2016]: E0208 23:40:08.582174 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.582254 kubelet[2016]: W0208 23:40:08.582189 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.582254 kubelet[2016]: E0208 23:40:08.582207 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.582569 kubelet[2016]: E0208 23:40:08.582548 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.582569 kubelet[2016]: W0208 23:40:08.582563 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.582716 kubelet[2016]: E0208 23:40:08.582640 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.583759 kubelet[2016]: E0208 23:40:08.582782 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:08.583759 kubelet[2016]: W0208 23:40:08.582794 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:08.583759 kubelet[2016]: E0208 23:40:08.582809 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:08.616000 audit[2309]: NETFILTER_CFG table=nat:63 family=2 entries=68 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:40:08.616000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe79c318a0 a2=0 a3=7ffe79c3188c items=0 ppid=2229 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:08.651000 audit[2344]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.651000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdc9a82d10 a2=0 a3=7ffdc9a82cfc items=0 ppid=2229 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Feb 8 23:40:08.655000 audit[2346]: NETFILTER_CFG table=filter:65 family=10 entries=2 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.655000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffff6a7a0e0 a2=0 a3=7ffff6a7a0cc items=0 ppid=2229 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Feb 8 23:40:08.659000 audit[2349]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.659000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff3ffdb1c0 a2=0 a3=7fff3ffdb1ac items=0 ppid=2229 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Feb 8 23:40:08.661000 audit[2350]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.661000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb5f405d0 a2=0 a3=7ffeb5f405bc items=0 ppid=2229 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Feb 8 23:40:08.663000 audit[2352]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.663000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd12f06510 a2=0 a3=7ffd12f064fc items=0 ppid=2229 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Feb 8 23:40:08.664000 audit[2353]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.664000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8cf54070 a2=0 a3=7fff8cf5405c items=0 ppid=2229 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Feb 8 23:40:08.666000 audit[2355]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_rule pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.666000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef2b84f00 a2=0 a3=7ffef2b84eec items=0 ppid=2229 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Feb 8 23:40:08.670000 audit[2358]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.670000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff4e3cce60 a2=0 a3=7fff4e3cce4c items=0 ppid=2229 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Feb 8 23:40:08.671000 audit[2359]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.671000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc378ce950 a2=0 a3=7ffc378ce93c items=0 ppid=2229 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Feb 8 23:40:08.673000 audit[2361]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.673000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff62204a00 a2=0 a3=7fff622049ec items=0 ppid=2229 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Feb 8 23:40:08.674000 audit[2362]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.674000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1ff1c830 a2=0 a3=7ffc1ff1c81c items=0 ppid=2229 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Feb 8 23:40:08.677000 audit[2364]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.677000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdeb7e3850 a2=0 a3=7ffdeb7e383c items=0 ppid=2229 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Feb 8 23:40:08.681000 audit[2367]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.681000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffee685080 a2=0 a3=7fffee68506c items=0 ppid=2229 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Feb 8 23:40:08.684000 audit[2370]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.684000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca847d700 a2=0 a3=7ffca847d6ec items=0 ppid=2229 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Feb 8 23:40:08.685000 audit[2371]: NETFILTER_CFG table=nat:78 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.685000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7b830a70 a2=0 a3=7ffc7b830a5c items=0 ppid=2229 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Feb 8 23:40:08.687000 audit[2373]: NETFILTER_CFG table=nat:79 family=10 entries=2 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.687000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe670c27f0 a2=0 a3=7ffe670c27dc items=0 ppid=2229 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Feb 8 23:40:08.690000 audit[2376]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 8 23:40:08.690000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc1e8549e0 a2=0 a3=7ffc1e8549cc items=0 ppid=2229 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.690000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Feb 8 23:40:08.696000 audit[2380]: NETFILTER_CFG table=filter:81 family=10 entries=3 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Feb 8 23:40:08.696000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc764c0500 a2=0 a3=7ffc764c04ec items=0 ppid=2229 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.696000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:08.696000 audit[2380]: NETFILTER_CFG table=nat:82 family=10 entries=10 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Feb 8 23:40:08.696000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffc764c0500 a2=0 a3=7ffc764c04ec items=0 ppid=2229 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:08.696000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:09.127676 kubelet[2016]: E0208 23:40:09.127646 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:09.565514 kubelet[2016]: E0208 23:40:09.565481 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.565514 kubelet[2016]: W0208 23:40:09.565505 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.565843 kubelet[2016]: E0208 23:40:09.565533 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.565843 kubelet[2016]: E0208 23:40:09.565799 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.565843 kubelet[2016]: W0208 23:40:09.565816 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.565843 kubelet[2016]: E0208 23:40:09.565836 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.566101 kubelet[2016]: E0208 23:40:09.566047 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.566101 kubelet[2016]: W0208 23:40:09.566058 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.566101 kubelet[2016]: E0208 23:40:09.566076 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.566350 kubelet[2016]: E0208 23:40:09.566329 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.566350 kubelet[2016]: W0208 23:40:09.566346 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.566505 kubelet[2016]: E0208 23:40:09.566365 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.566573 kubelet[2016]: E0208 23:40:09.566562 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.566632 kubelet[2016]: W0208 23:40:09.566573 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.566632 kubelet[2016]: E0208 23:40:09.566590 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.566812 kubelet[2016]: E0208 23:40:09.566791 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.566812 kubelet[2016]: W0208 23:40:09.566808 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.566947 kubelet[2016]: E0208 23:40:09.566828 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.567083 kubelet[2016]: E0208 23:40:09.567064 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.567083 kubelet[2016]: W0208 23:40:09.567078 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.567236 kubelet[2016]: E0208 23:40:09.567097 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.567299 kubelet[2016]: E0208 23:40:09.567290 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.567355 kubelet[2016]: W0208 23:40:09.567301 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.567355 kubelet[2016]: E0208 23:40:09.567321 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.567516 kubelet[2016]: E0208 23:40:09.567498 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.567516 kubelet[2016]: W0208 23:40:09.567514 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.567645 kubelet[2016]: E0208 23:40:09.567531 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:40:09.567880 kubelet[2016]: E0208 23:40:09.567859 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:40:09.567880 kubelet[2016]: W0208 23:40:09.567875 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:40:09.568057 kubelet[2016]: E0208 23:40:09.567895 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 8 23:40:09.568142 kubelet[2016]: E0208 23:40:09.568126 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.568203 kubelet[2016]: W0208 23:40:09.568155 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.568203 kubelet[2016]: E0208 23:40:09.568175 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.568428 kubelet[2016]: E0208 23:40:09.568408 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.568515 kubelet[2016]: W0208 23:40:09.568431 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.568515 kubelet[2016]: E0208 23:40:09.568449 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.568715 kubelet[2016]: E0208 23:40:09.568690 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.568819 kubelet[2016]: W0208 23:40:09.568716 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.568819 kubelet[2016]: E0208 23:40:09.568734 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.568954 kubelet[2016]: E0208 23:40:09.568946 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.569016 kubelet[2016]: W0208 23:40:09.568958 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.569016 kubelet[2016]: E0208 23:40:09.568976 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.569187 kubelet[2016]: E0208 23:40:09.569165 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.569187 kubelet[2016]: W0208 23:40:09.569182 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.569333 kubelet[2016]: E0208 23:40:09.569201 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.569413 kubelet[2016]: E0208 23:40:09.569397 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.569488 kubelet[2016]: W0208 23:40:09.569415 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.569488 kubelet[2016]: E0208 23:40:09.569432 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.588240 kubelet[2016]: E0208 23:40:09.588213 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.588240 kubelet[2016]: W0208 23:40:09.588232 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.588240 kubelet[2016]: E0208 23:40:09.588254 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.588576 kubelet[2016]: E0208 23:40:09.588557 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.588659 kubelet[2016]: W0208 23:40:09.588572 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.588659 kubelet[2016]: E0208 23:40:09.588607 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.588942 kubelet[2016]: E0208 23:40:09.588921 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.588942 kubelet[2016]: W0208 23:40:09.588936 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.589103 kubelet[2016]: E0208 23:40:09.588960 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.589223 kubelet[2016]: E0208 23:40:09.589206 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.589223 kubelet[2016]: W0208 23:40:09.589220 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.589378 kubelet[2016]: E0208 23:40:09.589244 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.589482 kubelet[2016]: E0208 23:40:09.589462 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.589482 kubelet[2016]: W0208 23:40:09.589478 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.589617 kubelet[2016]: E0208 23:40:09.589500 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.589806 kubelet[2016]: E0208 23:40:09.589788 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.589806 kubelet[2016]: W0208 23:40:09.589803 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.589960 kubelet[2016]: E0208 23:40:09.589900 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.590265 kubelet[2016]: E0208 23:40:09.590246 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.590265 kubelet[2016]: W0208 23:40:09.590260 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.590434 kubelet[2016]: E0208 23:40:09.590283 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.590555 kubelet[2016]: E0208 23:40:09.590539 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.590555 kubelet[2016]: W0208 23:40:09.590552 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.590695 kubelet[2016]: E0208 23:40:09.590575 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.590848 kubelet[2016]: E0208 23:40:09.590831 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.590848 kubelet[2016]: W0208 23:40:09.590845 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.590997 kubelet[2016]: E0208 23:40:09.590870 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.591169 kubelet[2016]: E0208 23:40:09.591142 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.591169 kubelet[2016]: W0208 23:40:09.591167 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.591313 kubelet[2016]: E0208 23:40:09.591192 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:09.591594 kubelet[2016]: E0208 23:40:09.591575 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.591594 kubelet[2016]: W0208 23:40:09.591592 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.591791 kubelet[2016]: E0208 23:40:09.591688 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:40:09.591888 kubelet[2016]: E0208 23:40:09.591868 2016 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:40:09.591888 kubelet[2016]: W0208 23:40:09.591884 2016 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:40:09.591985 kubelet[2016]: E0208 23:40:09.591901 2016 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:40:10.128363 kubelet[2016]: E0208 23:40:10.128306 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:10.437531 kubelet[2016]: E0208 23:40:10.436844 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:11.129014 kubelet[2016]: E0208 23:40:11.128979 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:12.129116 kubelet[2016]: E0208 23:40:12.129078 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:12.437214 kubelet[2016]: E0208 23:40:12.436347 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:12.655496 env[1432]: time="2024-02-08T23:40:12.655445319Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:12.661706 env[1432]: time="2024-02-08T23:40:12.661666404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:12.666822 env[1432]: time="2024-02-08T23:40:12.666788775Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:12.671180 env[1432]: time="2024-02-08T23:40:12.671146134Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:12.671983 env[1432]: time="2024-02-08T23:40:12.671952345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 8 23:40:12.674384 env[1432]: time="2024-02-08T23:40:12.674354978Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 8 23:40:12.697922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783113188.mount: Deactivated successfully. 
Feb 8 23:40:12.712012 env[1432]: time="2024-02-08T23:40:12.711968594Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445\"" Feb 8 23:40:12.712521 env[1432]: time="2024-02-08T23:40:12.712494401Z" level=info msg="StartContainer for \"16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445\"" Feb 8 23:40:12.781153 env[1432]: time="2024-02-08T23:40:12.781108742Z" level=info msg="StartContainer for \"16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445\" returns successfully" Feb 8 23:40:13.129652 kubelet[2016]: E0208 23:40:13.129583 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:13.598267 kubelet[2016]: I0208 23:40:13.499014 2016 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f92sw" podStartSLOduration=-9.223372021355808e+09 pod.CreationTimestamp="2024-02-08 23:39:58 +0000 UTC" firstStartedPulling="2024-02-08 23:40:06.712059573 +0000 UTC m=+20.972811758" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:08.487922613 +0000 UTC m=+22.748674798" watchObservedRunningTime="2024-02-08 23:40:13.49896702 +0000 UTC m=+27.759719205" Feb 8 23:40:13.610803 env[1432]: time="2024-02-08T23:40:13.610730615Z" level=info msg="shim disconnected" id=16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445 Feb 8 23:40:13.610803 env[1432]: time="2024-02-08T23:40:13.610796916Z" level=warning msg="cleaning up after shim disconnected" id=16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445 namespace=k8s.io Feb 8 23:40:13.611008 env[1432]: time="2024-02-08T23:40:13.610811516Z" level=info msg="cleaning up dead shim" Feb 8 23:40:13.618628 env[1432]: 
time="2024-02-08T23:40:13.618587421Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2460 runtime=io.containerd.runc.v2\n" Feb 8 23:40:13.694307 systemd[1]: run-containerd-runc-k8s.io-16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445-runc.uA4pc4.mount: Deactivated successfully. Feb 8 23:40:13.694500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ff773da191faba1db621687afc3d688d3e40e44123c9fb46439f8b2cf96445-rootfs.mount: Deactivated successfully. Feb 8 23:40:14.129774 kubelet[2016]: E0208 23:40:14.129707 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:14.437183 kubelet[2016]: E0208 23:40:14.437048 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:14.491391 env[1432]: time="2024-02-08T23:40:14.491353939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 8 23:40:15.129863 kubelet[2016]: E0208 23:40:15.129824 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:16.130117 kubelet[2016]: E0208 23:40:16.130053 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:16.436526 kubelet[2016]: E0208 23:40:16.436372 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 
Feb 8 23:40:16.527399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242843754.mount: Deactivated successfully. Feb 8 23:40:17.131267 kubelet[2016]: E0208 23:40:17.131228 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:18.132094 kubelet[2016]: E0208 23:40:18.132053 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:18.438004 kubelet[2016]: E0208 23:40:18.436845 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:19.132900 kubelet[2016]: E0208 23:40:19.132859 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:20.133423 kubelet[2016]: E0208 23:40:20.133384 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:20.437364 kubelet[2016]: E0208 23:40:20.437252 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:21.134161 kubelet[2016]: E0208 23:40:21.134116 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:22.010031 env[1432]: time="2024-02-08T23:40:22.009974064Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:40:22.014289 env[1432]: time="2024-02-08T23:40:22.014243110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:22.020777 env[1432]: time="2024-02-08T23:40:22.020710880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:22.024355 env[1432]: time="2024-02-08T23:40:22.024321819Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:22.025018 env[1432]: time="2024-02-08T23:40:22.024983026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 8 23:40:22.027466 env[1432]: time="2024-02-08T23:40:22.027433452Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 8 23:40:22.086332 env[1432]: time="2024-02-08T23:40:22.086274989Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063\"" Feb 8 23:40:22.087070 env[1432]: time="2024-02-08T23:40:22.087039097Z" level=info msg="StartContainer for \"17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063\"" Feb 8 23:40:22.113400 systemd[1]: run-containerd-runc-k8s.io-17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063-runc.j0OQfP.mount: 
Deactivated successfully. Feb 8 23:40:22.134927 kubelet[2016]: E0208 23:40:22.134894 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:22.157607 env[1432]: time="2024-02-08T23:40:22.153093011Z" level=info msg="StartContainer for \"17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063\" returns successfully" Feb 8 23:40:22.437617 kubelet[2016]: E0208 23:40:22.436331 2016 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825 Feb 8 23:40:23.135293 kubelet[2016]: E0208 23:40:23.135230 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:23.836022 env[1432]: time="2024-02-08T23:40:23.835952803Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:40:23.863194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063-rootfs.mount: Deactivated successfully. 
Feb 8 23:40:23.893157 kubelet[2016]: I0208 23:40:23.893129 2016 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:40:24.136251 kubelet[2016]: E0208 23:40:24.136117 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:25.493175 kubelet[2016]: E0208 23:40:25.136762 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:25.494449 env[1432]: time="2024-02-08T23:40:25.494400469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvxct,Uid:71447fa4-d4fe-4672-b40e-d0d97e4a6825,Namespace:calico-system,Attempt:0,}" Feb 8 23:40:25.512842 env[1432]: time="2024-02-08T23:40:25.512795754Z" level=info msg="shim disconnected" id=17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063 Feb 8 23:40:25.512842 env[1432]: time="2024-02-08T23:40:25.512839055Z" level=warning msg="cleaning up after shim disconnected" id=17d8fcf9ae785e6251e8897657c078c695148ad8a8b032659271f78f19bd1063 namespace=k8s.io Feb 8 23:40:25.513025 env[1432]: time="2024-02-08T23:40:25.512850555Z" level=info msg="cleaning up dead shim" Feb 8 23:40:25.521920 env[1432]: time="2024-02-08T23:40:25.521883946Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2529 runtime=io.containerd.runc.v2\n" Feb 8 23:40:25.583727 env[1432]: time="2024-02-08T23:40:25.583651570Z" level=error msg="Failed to destroy network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:40:25.585986 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9-shm.mount: Deactivated successfully.
Feb 8 23:40:25.586993 env[1432]: time="2024-02-08T23:40:25.586943304Z" level=error msg="encountered an error cleaning up failed sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:25.587095 env[1432]: time="2024-02-08T23:40:25.587031505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvxct,Uid:71447fa4-d4fe-4672-b40e-d0d97e4a6825,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:25.587656 kubelet[2016]: E0208 23:40:25.587287 2016 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:25.587656 kubelet[2016]: E0208 23:40:25.587349 2016 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvxct"
Feb 8 23:40:25.587656 kubelet[2016]: E0208 23:40:25.587369 2016 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvxct"
Feb 8 23:40:25.587851 kubelet[2016]: E0208 23:40:25.587437 2016 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tvxct_calico-system(71447fa4-d4fe-4672-b40e-d0d97e4a6825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tvxct_calico-system(71447fa4-d4fe-4672-b40e-d0d97e4a6825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825
Feb 8 23:40:26.115492 kubelet[2016]: E0208 23:40:26.115440 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:26.137731 kubelet[2016]: E0208 23:40:26.137670 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:26.524726 env[1432]: time="2024-02-08T23:40:26.524674667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 8 23:40:26.525653 kubelet[2016]: I0208 23:40:26.525632 2016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9"
Feb 8 23:40:26.526479 env[1432]: time="2024-02-08T23:40:26.526438885Z" level=info msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\""
Feb 8 23:40:26.549812 env[1432]: time="2024-02-08T23:40:26.549760915Z" level=error msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" failed" error="failed to destroy network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:26.550073 kubelet[2016]: E0208 23:40:26.550050 2016 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9"
Feb 8 23:40:26.550190 kubelet[2016]: E0208 23:40:26.550107 2016 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9}
Feb 8 23:40:26.550190 kubelet[2016]: E0208 23:40:26.550181 2016 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 8 23:40:26.550330 kubelet[2016]: E0208 23:40:26.550219 2016 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71447fa4-d4fe-4672-b40e-d0d97e4a6825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tvxct" podUID=71447fa4-d4fe-4672-b40e-d0d97e4a6825
Feb 8 23:40:27.138354 kubelet[2016]: E0208 23:40:27.138295 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:27.609064 kubelet[2016]: I0208 23:40:27.609019 2016 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:40:27.792543 kubelet[2016]: I0208 23:40:27.792495 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478bp\" (UniqueName: \"kubernetes.io/projected/fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f-kube-api-access-478bp\") pod \"nginx-deployment-8ffc5cf85-g89tw\" (UID: \"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f\") " pod="default/nginx-deployment-8ffc5cf85-g89tw"
Feb 8 23:40:27.913065 env[1432]: time="2024-02-08T23:40:27.912964097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g89tw,Uid:fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f,Namespace:default,Attempt:0,}"
Feb 8 23:40:27.999377 env[1432]: time="2024-02-08T23:40:27.999324832Z" level=error msg="Failed to destroy network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:28.002502 env[1432]: time="2024-02-08T23:40:28.002330161Z" level=error msg="encountered an error cleaning up failed sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:28.002502 env[1432]: time="2024-02-08T23:40:28.002391862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g89tw,Uid:fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:28.001615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818-shm.mount: Deactivated successfully.
Feb 8 23:40:28.002965 kubelet[2016]: E0208 23:40:28.002647 2016 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:28.002965 kubelet[2016]: E0208 23:40:28.002706 2016 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-g89tw"
Feb 8 23:40:28.002965 kubelet[2016]: E0208 23:40:28.002736 2016 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-g89tw"
Feb 8 23:40:28.003116 kubelet[2016]: E0208 23:40:28.002826 2016 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-g89tw_default(fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-g89tw_default(fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-g89tw" podUID=fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f
Feb 8 23:40:28.138998 kubelet[2016]: E0208 23:40:28.138940 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:28.536963 kubelet[2016]: I0208 23:40:28.536925 2016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818"
Feb 8 23:40:28.537707 env[1432]: time="2024-02-08T23:40:28.537650427Z" level=info msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\""
Feb 8 23:40:28.561031 env[1432]: time="2024-02-08T23:40:28.560980948Z" level=error msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" failed" error="failed to destroy network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:40:28.561256 kubelet[2016]: E0208 23:40:28.561233 2016 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818"
Feb 8 23:40:28.561353 kubelet[2016]: E0208 23:40:28.561289 2016 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818}
Feb 8 23:40:28.561353 kubelet[2016]: E0208 23:40:28.561336 2016 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 8 23:40:28.561483 kubelet[2016]: E0208 23:40:28.561389 2016 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-g89tw" podUID=fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f
Feb 8 23:40:29.139871 kubelet[2016]: E0208 23:40:29.139826 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:30.140715 kubelet[2016]: E0208 23:40:30.140662 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:31.141258 kubelet[2016]: E0208 23:40:31.141203 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:32.141597 kubelet[2016]: E0208 23:40:32.141549 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:33.142592 kubelet[2016]: E0208 23:40:33.142531 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:34.143220 kubelet[2016]: E0208 23:40:34.143176 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:34.871575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978798523.mount: Deactivated successfully.
Feb 8 23:40:34.988935 env[1432]: time="2024-02-08T23:40:34.988874495Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:34.995394 env[1432]: time="2024-02-08T23:40:34.995352649Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:34.999667 env[1432]: time="2024-02-08T23:40:34.999635685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:35.003669 env[1432]: time="2024-02-08T23:40:35.003638318Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:35.004046 env[1432]: time="2024-02-08T23:40:35.004014321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\""
Feb 8 23:40:35.019356 env[1432]: time="2024-02-08T23:40:35.019319646Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 8 23:40:35.059607 env[1432]: time="2024-02-08T23:40:35.059556576Z" level=info msg="CreateContainer within sandbox \"979fccd3787aa00a3cf5082e983eed1d504d2e87449bc8ebf7e035ab82f02e26\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0\""
Feb 8 23:40:35.060318 env[1432]: time="2024-02-08T23:40:35.060278782Z" level=info msg="StartContainer for \"7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0\""
Feb 8 23:40:35.113779 env[1432]: time="2024-02-08T23:40:35.112796911Z" level=info msg="StartContainer for \"7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0\" returns successfully"
Feb 8 23:40:35.144267 kubelet[2016]: E0208 23:40:35.144166 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:35.392999 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 8 23:40:35.393168 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 8 23:40:35.583593 kubelet[2016]: I0208 23:40:35.583563 2016 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-glgq5" podStartSLOduration=-9.223372000271257e+09 pod.CreationTimestamp="2024-02-08 23:39:59 +0000 UTC" firstStartedPulling="2024-02-08 23:40:06.71313099 +0000 UTC m=+20.973883275" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:35.583122161 +0000 UTC m=+49.843874446" watchObservedRunningTime="2024-02-08 23:40:35.583518264 +0000 UTC m=+49.844270549"
Feb 8 23:40:36.145128 kubelet[2016]: E0208 23:40:36.145070 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:36.579798 systemd[1]: run-containerd-runc-k8s.io-7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0-runc.CmaiAR.mount: Deactivated successfully.
Feb 8 23:40:36.713964 kernel: kauditd_printk_skb: 122 callbacks suppressed
Feb 8 23:40:36.714112 kernel: audit: type=1400 audit(1707435636.693:245): avc: denied { write } for pid=2796 comm="tee" name="fd" dev="proc" ino=22481 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.693000 audit[2796]: AVC avc: denied { write } for pid=2796 comm="tee" name="fd" dev="proc" ino=22481 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.693000 audit[2796]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd9439a984 a2=241 a3=1b6 items=1 ppid=2758 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.749436 kernel: audit: type=1300 audit(1707435636.693:245): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd9439a984 a2=241 a3=1b6 items=1 ppid=2758 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.693000 audit: CWD cwd="/etc/service/enabled/cni/log"
Feb 8 23:40:36.765760 kernel: audit: type=1307 audit(1707435636.693:245): cwd="/etc/service/enabled/cni/log"
Feb 8 23:40:36.693000 audit: PATH item=0 name="/dev/fd/63" inode=22458 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.784833 kernel: audit: type=1302 audit(1707435636.693:245): item=0 name="/dev/fd/63" inode=22458 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.812001 kernel: audit: type=1327 audit(1707435636.693:245): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.812105 kernel: audit: type=1400 audit(1707435636.717:246): avc: denied { write } for pid=2802 comm="tee" name="fd" dev="proc" ino=23493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.717000 audit[2802]: AVC avc: denied { write } for pid=2802 comm="tee" name="fd" dev="proc" ino=23493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.832937 kernel: audit: type=1300 audit(1707435636.717:246): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0285f982 a2=241 a3=1b6 items=1 ppid=2760 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.717000 audit[2802]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0285f982 a2=241 a3=1b6 items=1 ppid=2760 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.717000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Feb 8 23:40:36.853868 kernel: audit: type=1307 audit(1707435636.717:246): cwd="/etc/service/enabled/bird6/log"
Feb 8 23:40:36.853964 kernel: audit: type=1302 audit(1707435636.717:246): item=0 name="/dev/fd/63" inode=22470 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.717000 audit: PATH item=0 name="/dev/fd/63" inode=22470 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.866160 kernel: audit: type=1327 audit(1707435636.717:246): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.754000 audit[2818]: AVC avc: denied { write } for pid=2818 comm="tee" name="fd" dev="proc" ino=23518 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.754000 audit[2818]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc59600983 a2=241 a3=1b6 items=1 ppid=2773 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.754000 audit: CWD cwd="/etc/service/enabled/bird/log"
Feb 8 23:40:36.754000 audit: PATH item=0 name="/dev/fd/63" inode=23500 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.754000 audit[2820]: AVC avc: denied { write } for pid=2820 comm="tee" name="fd" dev="proc" ino=23522 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.754000 audit[2820]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde6371982 a2=241 a3=1b6 items=1 ppid=2764 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.754000 audit: CWD cwd="/etc/service/enabled/confd/log"
Feb 8 23:40:36.754000 audit: PATH item=0 name="/dev/fd/63" inode=23503 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.781000 audit[2822]: AVC avc: denied { write } for pid=2822 comm="tee" name="fd" dev="proc" ino=22487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.781000 audit[2822]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1c450982 a2=241 a3=1b6 items=1 ppid=2762 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.781000 audit: CWD cwd="/etc/service/enabled/felix/log"
Feb 8 23:40:36.781000 audit: PATH item=0 name="/dev/fd/63" inode=23508 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.784000 audit[2824]: AVC avc: denied { write } for pid=2824 comm="tee" name="fd" dev="proc" ino=22491 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.784000 audit[2824]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4b0a4972 a2=241 a3=1b6 items=1 ppid=2770 pid=2824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.784000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Feb 8 23:40:36.784000 audit: PATH item=0 name="/dev/fd/63" inode=23509 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:36.784000 audit[2832]: AVC avc: denied { write } for pid=2832 comm="tee" name="fd" dev="proc" ino=23529 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 8 23:40:36.784000 audit[2832]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe6882e973 a2=241 a3=1b6 items=1 ppid=2772 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:36.784000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Feb 8 23:40:36.784000 audit: PATH item=0 name="/dev/fd/63" inode=23526 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:40:36.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 8 23:40:37.003050 kernel: Initializing XFRM netlink socket
Feb 8 23:40:37.147046 kubelet[2016]: E0208 23:40:37.145712 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit: BPF prog-id=10 op=LOAD
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe07f07070 a2=70 a3=7f9dbd884000 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit: BPF prog-id=10 op=UNLOAD
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit: BPF prog-id=11 op=LOAD
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe07f07070 a2=70 a3=6e items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit: BPF prog-id=11 op=UNLOAD
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe07f07020 a2=70 a3=7ffe07f07070 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit: BPF prog-id=12 op=LOAD
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe07f07000 a2=70 a3=7ffe07f07070 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit: BPF prog-id=12 op=UNLOAD
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe07f070e0 a2=70 a3=0 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe07f070d0 a2=70 a3=0 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 8 23:40:37.144000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:40:37.144000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe07f07110 a2=70 a3=0 items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:37.144000 audit: PROCTITLE
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { bpf } for pid=2901 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: AVC avc: denied { perfmon } for pid=2901 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.145000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe07f07030 a2=70 a3=ffffffff items=0 ppid=2763 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.145000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:40:37.148000 audit[2903]: AVC avc: denied { bpf } for pid=2903 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.148000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc26cd6700 a2=70 a3=fff80800 items=0 ppid=2763 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:40:37.148000 audit[2903]: AVC avc: denied { bpf } for pid=2903 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:40:37.148000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc26cd65d0 a2=70 a3=3 items=0 ppid=2763 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:40:37.154000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:40:37.247000 audit[2927]: NETFILTER_CFG table=mangle:83 family=2 
entries=19 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:37.247000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffc4d8bad20 a2=0 a3=7ffc4d8bad0c items=0 ppid=2763 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.247000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:37.257000 audit[2926]: NETFILTER_CFG table=filter:84 family=2 entries=39 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:37.257000 audit[2926]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7fff90211610 a2=0 a3=7fff902115fc items=0 ppid=2763 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.257000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:37.258000 audit[2929]: NETFILTER_CFG table=nat:85 family=2 entries=16 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:37.258000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc99663170 a2=0 a3=563e87166000 items=0 ppid=2763 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.258000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:37.271000 audit[2925]: NETFILTER_CFG table=raw:86 family=2 entries=19 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:37.271000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7fff89af2c30 a2=0 a3=56324b017000 items=0 ppid=2763 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:37.271000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:38.060891 systemd-networkd[1578]: vxlan.calico: Link UP Feb 8 23:40:38.060900 systemd-networkd[1578]: vxlan.calico: Gained carrier Feb 8 23:40:38.146488 kubelet[2016]: E0208 23:40:38.146433 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:39.146951 kubelet[2016]: E0208 23:40:39.146895 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:39.437518 env[1432]: time="2024-02-08T23:40:39.437302897Z" level=info msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\"" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] k8s.go 578: Cleaning up netns ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" iface="eth0" netns="/var/run/netns/cni-9781d949-9b13-b05a-4c7b-0ba902f0cc81" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" iface="eth0" netns="/var/run/netns/cni-9781d949-9b13-b05a-4c7b-0ba902f0cc81" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" iface="eth0" netns="/var/run/netns/cni-9781d949-9b13-b05a-4c7b-0ba902f0cc81" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] k8s.go 585: Releasing IP address(es) ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.479 [INFO][2958] utils.go 188: Calico CNI releasing IP address ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.497 [INFO][2964] ipam_plugin.go 415: Releasing address using handleID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.497 [INFO][2964] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.498 [INFO][2964] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.505 [WARNING][2964] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.505 [INFO][2964] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.506 [INFO][2964] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:39.508250 env[1432]: 2024-02-08 23:40:39.507 [INFO][2958] k8s.go 591: Teardown processing complete. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:39.512759 env[1432]: time="2024-02-08T23:40:39.508416336Z" level=info msg="TearDown network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" successfully" Feb 8 23:40:39.512759 env[1432]: time="2024-02-08T23:40:39.508456436Z" level=info msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" returns successfully" Feb 8 23:40:39.510121 systemd[1]: run-netns-cni\x2d9781d949\x2d9b13\x2db05a\x2d4c7b\x2d0ba902f0cc81.mount: Deactivated successfully. 
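The audit records above are dense but mechanically decodable: `proctitle` is the process's argv, hex-encoded with NUL separators; `arch=c000003e` is the `AUDIT_ARCH_X86_64` constant from the kernel's audit UAPI headers; `syscall=321` is `bpf(2)` on x86_64, and `syscall=46` in the NETFILTER_CFG records is `sendmsg(2)`, the netlink call the iptables-nft tools use. A small decoding sketch (the constants are from the Linux UAPI headers; the hex string is copied verbatim from the PROCTITLE records above):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE field: argv hex-encoded, NUL-separated."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# audit `arch` field layout: ELF machine number plus width/endianness flags
# (values from the Linux audit UAPI headers).
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE = 0x40000000
EM_X86_64 = 62  # 0x3e

def is_x86_64(arch: int) -> bool:
    """True when the audit arch field denotes 64-bit little-endian x86."""
    return arch == (AUDIT_ARCH_64BIT | AUDIT_ARCH_LE | EM_X86_64)

# Verbatim from the bpftool PROCTITLE records in the log above.
PROCTITLE_HEX = (
    "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F"
    "6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864"
    "702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470"
)

if __name__ == "__main__":
    print(decode_proctitle(PROCTITLE_HEX))
    # -> bpftool prog load /usr/lib/calico/bpf/filter.o
    #    /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp
    print(is_x86_64(0xC000003E))  # True: syscall=321 is then bpf(2)
```

Decoded this way, the whole burst of AVC denials reads as Calico's felix loading its XDP prefilter object via `bpftool prog load`, then inspecting it with `bpftool --json --pretty prog show pinned …`, while the `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000` invocations register its chains.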
Feb 8 23:40:39.513516 env[1432]: time="2024-02-08T23:40:39.513480174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvxct,Uid:71447fa4-d4fe-4672-b40e-d0d97e4a6825,Namespace:calico-system,Attempt:1,}" Feb 8 23:40:39.665028 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:40:39.665138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9616f004dad: link becomes ready Feb 8 23:40:39.655557 systemd-networkd[1578]: cali9616f004dad: Link UP Feb 8 23:40:39.665796 systemd-networkd[1578]: cali9616f004dad: Gained carrier Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.591 [INFO][2971] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.17-k8s-csi--node--driver--tvxct-eth0 csi-node-driver- calico-system 71447fa4-d4fe-4672-b40e-d0d97e4a6825 1246 0 2024-02-08 23:39:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.17 csi-node-driver-tvxct eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9616f004dad [] []}} ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.591 [INFO][2971] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.621 [INFO][2982] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" HandleID="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.632 [INFO][2982] ipam_plugin.go 268: Auto assigning IP ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" HandleID="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022ec00), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.17", "pod":"csi-node-driver-tvxct", "timestamp":"2024-02-08 23:40:39.621950095 +0000 UTC"}, Hostname:"10.200.8.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.632 [INFO][2982] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.632 [INFO][2982] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.632 [INFO][2982] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.17' Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.633 [INFO][2982] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.636 [INFO][2982] ipam.go 372: Looking up existing affinities for host host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.640 [INFO][2982] ipam.go 489: Trying affinity for 192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.641 [INFO][2982] ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.642 [INFO][2982] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.643 [INFO][2982] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.644 [INFO][2982] ipam.go 1682: Creating new handle: k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142 Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.647 [INFO][2982] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.651 [INFO][2982] ipam.go 1216: Successfully claimed IPs: [192.168.120.193/26] block=192.168.120.192/26 handle="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 
23:40:39.651 [INFO][2982] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.193/26] handle="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" host="10.200.8.17" Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.651 [INFO][2982] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:39.691606 env[1432]: 2024-02-08 23:40:39.651 [INFO][2982] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.120.193/26] IPv6=[] ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" HandleID="k8s-pod-network.65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.653 [INFO][2971] k8s.go 385: Populated endpoint ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-csi--node--driver--tvxct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71447fa4-d4fe-4672-b40e-d0d97e4a6825", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 39, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"", Pod:"csi-node-driver-tvxct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9616f004dad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.653 [INFO][2971] k8s.go 386: Calico CNI using IPs: [192.168.120.193/32] ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.653 [INFO][2971] dataplane_linux.go 68: Setting the host side veth name to cali9616f004dad ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.667 [INFO][2971] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.667 [INFO][2971] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-csi--node--driver--tvxct-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71447fa4-d4fe-4672-b40e-d0d97e4a6825", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 39, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142", Pod:"csi-node-driver-tvxct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9616f004dad", MAC:"a6:1e:e7:a2:75:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:39.692518 env[1432]: 2024-02-08 23:40:39.676 [INFO][2971] k8s.go 491: Wrote updated endpoint to datastore ContainerID="65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142" Namespace="calico-system" Pod="csi-node-driver-tvxct" WorkloadEndpoint="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:39.699000 audit[3005]: NETFILTER_CFG table=filter:87 family=2 entries=36 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:39.699000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffcd64c2e80 a2=0 a3=7ffcd64c2e6c items=0 ppid=2763 pid=3005 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:39.699000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:39.720436 env[1432]: time="2024-02-08T23:40:39.720362140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:39.720436 env[1432]: time="2024-02-08T23:40:39.720402841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:39.720647 env[1432]: time="2024-02-08T23:40:39.720611242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:39.720875 env[1432]: time="2024-02-08T23:40:39.720835344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142 pid=3014 runtime=io.containerd.runc.v2 Feb 8 23:40:39.764825 env[1432]: time="2024-02-08T23:40:39.764781677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvxct,Uid:71447fa4-d4fe-4672-b40e-d0d97e4a6825,Namespace:calico-system,Attempt:1,} returns sandbox id \"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142\"" Feb 8 23:40:39.766541 env[1432]: time="2024-02-08T23:40:39.766515690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 8 23:40:39.959921 systemd-networkd[1578]: vxlan.calico: Gained IPv6LL Feb 8 23:40:40.147585 kubelet[2016]: E0208 23:40:40.147537 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:41.148541 
kubelet[2016]: E0208 23:40:41.148486 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:41.367025 systemd-networkd[1578]: cali9616f004dad: Gained IPv6LL Feb 8 23:40:41.930452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123162626.mount: Deactivated successfully. Feb 8 23:40:42.148650 kubelet[2016]: E0208 23:40:42.148611 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:42.719256 env[1432]: time="2024-02-08T23:40:42.719208232Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:42.726059 env[1432]: time="2024-02-08T23:40:42.726019481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:42.732406 env[1432]: time="2024-02-08T23:40:42.732377727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:42.739636 env[1432]: time="2024-02-08T23:40:42.739603078Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:42.740167 env[1432]: time="2024-02-08T23:40:42.740132182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 8 23:40:42.742132 env[1432]: time="2024-02-08T23:40:42.742096396Z" level=info msg="CreateContainer within sandbox 
\"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 8 23:40:42.778820 env[1432]: time="2024-02-08T23:40:42.778776159Z" level=info msg="CreateContainer within sandbox \"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e64e9fe6984035f0420d6527c8e1c40918ce65efdbe595f3939135b311e34746\"" Feb 8 23:40:42.779416 env[1432]: time="2024-02-08T23:40:42.779359263Z" level=info msg="StartContainer for \"e64e9fe6984035f0420d6527c8e1c40918ce65efdbe595f3939135b311e34746\"" Feb 8 23:40:42.841340 env[1432]: time="2024-02-08T23:40:42.841303407Z" level=info msg="StartContainer for \"e64e9fe6984035f0420d6527c8e1c40918ce65efdbe595f3939135b311e34746\" returns successfully" Feb 8 23:40:42.842585 env[1432]: time="2024-02-08T23:40:42.842556316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 8 23:40:43.149236 kubelet[2016]: E0208 23:40:43.149108 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:43.437390 env[1432]: time="2024-02-08T23:40:43.437189818Z" level=info msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\"" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.472 [INFO][3101] k8s.go 578: Cleaning up netns ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.472 [INFO][3101] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" iface="eth0" netns="/var/run/netns/cni-9d11be7c-290b-1e03-4257-a23b203091f5" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.472 [INFO][3101] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" iface="eth0" netns="/var/run/netns/cni-9d11be7c-290b-1e03-4257-a23b203091f5" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.472 [INFO][3101] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" iface="eth0" netns="/var/run/netns/cni-9d11be7c-290b-1e03-4257-a23b203091f5" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.473 [INFO][3101] k8s.go 585: Releasing IP address(es) ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.473 [INFO][3101] utils.go 188: Calico CNI releasing IP address ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.489 [INFO][3107] ipam_plugin.go 415: Releasing address using handleID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.489 [INFO][3107] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.489 [INFO][3107] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.495 [WARNING][3107] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.495 [INFO][3107] ipam_plugin.go 443: Releasing address using workloadID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.497 [INFO][3107] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:43.499119 env[1432]: 2024-02-08 23:40:43.498 [INFO][3101] k8s.go 591: Teardown processing complete. ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:43.502971 env[1432]: time="2024-02-08T23:40:43.502513177Z" level=info msg="TearDown network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" successfully" Feb 8 23:40:43.502971 env[1432]: time="2024-02-08T23:40:43.502559678Z" level=info msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" returns successfully" Feb 8 23:40:43.501812 systemd[1]: run-netns-cni\x2d9d11be7c\x2d290b\x2d1e03\x2d4257\x2da23b203091f5.mount: Deactivated successfully. 
Feb 8 23:40:43.503408 env[1432]: time="2024-02-08T23:40:43.503376684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g89tw,Uid:fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f,Namespace:default,Attempt:1,}" Feb 8 23:40:43.645825 systemd-networkd[1578]: calid70ada7b175: Link UP Feb 8 23:40:43.656213 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:40:43.656314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid70ada7b175: link becomes ready Feb 8 23:40:43.659841 systemd-networkd[1578]: calid70ada7b175: Gained carrier Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.582 [INFO][3113] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0 nginx-deployment-8ffc5cf85- default fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f 1265 0 2024-02-08 23:40:27 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.17 nginx-deployment-8ffc5cf85-g89tw eth0 default [] [] [kns.default ksa.default.default] calid70ada7b175 [] []}} ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.582 [INFO][3113] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.606 [INFO][3125] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" 
HandleID="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.616 [INFO][3125] ipam_plugin.go 268: Auto assigning IP ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" HandleID="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d980), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.17", "pod":"nginx-deployment-8ffc5cf85-g89tw", "timestamp":"2024-02-08 23:40:43.606212407 +0000 UTC"}, Hostname:"10.200.8.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.616 [INFO][3125] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.616 [INFO][3125] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.617 [INFO][3125] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.17' Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.618 [INFO][3125] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.621 [INFO][3125] ipam.go 372: Looking up existing affinities for host host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.624 [INFO][3125] ipam.go 489: Trying affinity for 192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.626 [INFO][3125] ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.628 [INFO][3125] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.628 [INFO][3125] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.629 [INFO][3125] ipam.go 1682: Creating new handle: k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.632 [INFO][3125] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.638 [INFO][3125] ipam.go 1216: Successfully claimed IPs: [192.168.120.194/26] block=192.168.120.192/26 handle="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 
23:40:43.638 [INFO][3125] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.194/26] handle="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" host="10.200.8.17" Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.638 [INFO][3125] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:43.667083 env[1432]: 2024-02-08 23:40:43.638 [INFO][3125] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.120.194/26] IPv6=[] ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" HandleID="k8s-pod-network.bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.639 [INFO][3113] k8s.go 385: Populated endpoint ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"", 
Pod:"nginx-deployment-8ffc5cf85-g89tw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid70ada7b175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.640 [INFO][3113] k8s.go 386: Calico CNI using IPs: [192.168.120.194/32] ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.640 [INFO][3113] dataplane_linux.go 68: Setting the host side veth name to calid70ada7b175 ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.659 [INFO][3113] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.660 [INFO][3113] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0", 
GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c", Pod:"nginx-deployment-8ffc5cf85-g89tw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid70ada7b175", MAC:"f2:10:d3:9f:f2:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:43.669714 env[1432]: 2024-02-08 23:40:43.665 [INFO][3113] k8s.go 491: Wrote updated endpoint to datastore ContainerID="bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c" Namespace="default" Pod="nginx-deployment-8ffc5cf85-g89tw" WorkloadEndpoint="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:43.690000 audit[3149]: NETFILTER_CFG table=filter:88 family=2 entries=40 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:43.695965 kernel: kauditd_printk_skb: 119 callbacks suppressed Feb 8 23:40:43.696048 kernel: audit: type=1325 audit(1707435643.690:271): table=filter:88 family=2 entries=40 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:40:43.697852 
env[1432]: time="2024-02-08T23:40:43.697801751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:43.698009 env[1432]: time="2024-02-08T23:40:43.697988252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:43.698108 env[1432]: time="2024-02-08T23:40:43.698090953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:43.698321 env[1432]: time="2024-02-08T23:40:43.698296254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c pid=3158 runtime=io.containerd.runc.v2 Feb 8 23:40:43.690000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fff020f65b0 a2=0 a3=7fff020f659c items=0 ppid=2763 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:43.690000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:43.745209 kernel: audit: type=1300 audit(1707435643.690:271): arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fff020f65b0 a2=0 a3=7fff020f659c items=0 ppid=2763 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:43.745321 kernel: audit: type=1327 audit(1707435643.690:271): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:43.781618 env[1432]: time="2024-02-08T23:40:43.781573840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g89tw,Uid:fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f,Namespace:default,Attempt:1,} returns sandbox id \"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c\"" Feb 8 23:40:44.149845 kubelet[2016]: E0208 23:40:44.149775 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:45.150301 kubelet[2016]: E0208 23:40:45.150253 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:45.335216 systemd-networkd[1578]: calid70ada7b175: Gained IPv6LL Feb 8 23:40:45.967485 env[1432]: time="2024-02-08T23:40:45.967418948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:45.973131 env[1432]: time="2024-02-08T23:40:45.973090087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:45.977976 env[1432]: time="2024-02-08T23:40:45.977944720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:45.981216 env[1432]: time="2024-02-08T23:40:45.981185042Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:45.981596 env[1432]: time="2024-02-08T23:40:45.981565444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 8 23:40:45.982816 env[1432]: time="2024-02-08T23:40:45.982560151Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:40:45.983777 env[1432]: time="2024-02-08T23:40:45.983732559Z" level=info msg="CreateContainer within sandbox \"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 8 23:40:46.021055 env[1432]: time="2024-02-08T23:40:46.021006610Z" level=info msg="CreateContainer within sandbox \"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c65f291956945f34d23114a289049d8af28026ce191fdb64cbefa64bdf22407f\"" Feb 8 23:40:46.021555 env[1432]: time="2024-02-08T23:40:46.021525213Z" level=info msg="StartContainer for \"c65f291956945f34d23114a289049d8af28026ce191fdb64cbefa64bdf22407f\"" Feb 8 23:40:46.081860 env[1432]: time="2024-02-08T23:40:46.080858709Z" level=info msg="StartContainer for \"c65f291956945f34d23114a289049d8af28026ce191fdb64cbefa64bdf22407f\" returns successfully" Feb 8 23:40:46.115890 kubelet[2016]: E0208 23:40:46.115846 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:46.125993 env[1432]: time="2024-02-08T23:40:46.125948109Z" level=info msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\"" Feb 8 23:40:46.151122 kubelet[2016]: E0208 23:40:46.151074 2016 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.162 [WARNING][3241] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-csi--node--driver--tvxct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71447fa4-d4fe-4672-b40e-d0d97e4a6825", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 39, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142", Pod:"csi-node-driver-tvxct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9616f004dad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.162 [INFO][3241] k8s.go 578: Cleaning up netns 
ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.162 [INFO][3241] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" iface="eth0" netns="" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.162 [INFO][3241] k8s.go 585: Releasing IP address(es) ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.162 [INFO][3241] utils.go 188: Calico CNI releasing IP address ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.180 [INFO][3247] ipam_plugin.go 415: Releasing address using handleID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.180 [INFO][3247] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.180 [INFO][3247] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.187 [WARNING][3247] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.187 [INFO][3247] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.188 [INFO][3247] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:46.190229 env[1432]: 2024-02-08 23:40:46.189 [INFO][3241] k8s.go 591: Teardown processing complete. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.191056 env[1432]: time="2024-02-08T23:40:46.191008643Z" level=info msg="TearDown network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" successfully" Feb 8 23:40:46.191168 env[1432]: time="2024-02-08T23:40:46.191148244Z" level=info msg="StopPodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" returns successfully" Feb 8 23:40:46.191898 env[1432]: time="2024-02-08T23:40:46.191866849Z" level=info msg="RemovePodSandbox for \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\"" Feb 8 23:40:46.192153 env[1432]: time="2024-02-08T23:40:46.192098451Z" level=info msg="Forcibly stopping sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\"" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.227 [WARNING][3265] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-csi--node--driver--tvxct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71447fa4-d4fe-4672-b40e-d0d97e4a6825", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 39, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"65ed49bc6a43b2d855eada19529d747d28bf6fb4e6fbef15bf86e113b27ce142", Pod:"csi-node-driver-tvxct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9616f004dad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.227 [INFO][3265] k8s.go 578: Cleaning up netns ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.227 [INFO][3265] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" iface="eth0" netns="" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.228 [INFO][3265] k8s.go 585: Releasing IP address(es) ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.228 [INFO][3265] utils.go 188: Calico CNI releasing IP address ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.248 [INFO][3271] ipam_plugin.go 415: Releasing address using handleID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.248 [INFO][3271] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.248 [INFO][3271] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.256 [WARNING][3271] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.256 [INFO][3271] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" HandleID="k8s-pod-network.c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Workload="10.200.8.17-k8s-csi--node--driver--tvxct-eth0" Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.257 [INFO][3271] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:40:46.260025 env[1432]: 2024-02-08 23:40:46.258 [INFO][3265] k8s.go 591: Teardown processing complete. ContainerID="c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9" Feb 8 23:40:46.260025 env[1432]: time="2024-02-08T23:40:46.258947597Z" level=info msg="TearDown network for sandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" successfully" Feb 8 23:40:46.267922 env[1432]: time="2024-02-08T23:40:46.267885956Z" level=info msg="RemovePodSandbox \"c55d2342700117d989da10b8ceb4a06de4aa543f0ec55bc5447223aac85455b9\" returns successfully" Feb 8 23:40:46.268351 env[1432]: time="2024-02-08T23:40:46.268325859Z" level=info msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\"" Feb 8 23:40:46.274862 kubelet[2016]: I0208 23:40:46.274840 2016 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 8 23:40:46.274862 kubelet[2016]: I0208 23:40:46.274875 2016 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.314 [WARNING][3290] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c", Pod:"nginx-deployment-8ffc5cf85-g89tw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid70ada7b175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.315 [INFO][3290] k8s.go 578: Cleaning up netns ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.315 [INFO][3290] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" iface="eth0" netns="" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.315 [INFO][3290] k8s.go 585: Releasing IP address(es) ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.315 [INFO][3290] utils.go 188: Calico CNI releasing IP address ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.334 [INFO][3296] ipam_plugin.go 415: Releasing address using handleID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.335 [INFO][3296] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.335 [INFO][3296] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.343 [WARNING][3296] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.343 [INFO][3296] ipam_plugin.go 443: Releasing address using workloadID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.344 [INFO][3296] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:40:46.346337 env[1432]: 2024-02-08 23:40:46.345 [INFO][3290] k8s.go 591: Teardown processing complete. ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.346937 env[1432]: time="2024-02-08T23:40:46.346367680Z" level=info msg="TearDown network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" successfully" Feb 8 23:40:46.346937 env[1432]: time="2024-02-08T23:40:46.346408280Z" level=info msg="StopPodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" returns successfully" Feb 8 23:40:46.347160 env[1432]: time="2024-02-08T23:40:46.347127485Z" level=info msg="RemovePodSandbox for \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\"" Feb 8 23:40:46.347243 env[1432]: time="2024-02-08T23:40:46.347162685Z" level=info msg="Forcibly stopping sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\"" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.378 [WARNING][3314] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"fb5f12ea-b383-4ce2-87fb-29a67a1c0a1f", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c", Pod:"nginx-deployment-8ffc5cf85-g89tw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid70ada7b175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.379 [INFO][3314] k8s.go 578: Cleaning up netns ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.379 [INFO][3314] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" iface="eth0" netns="" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.379 [INFO][3314] k8s.go 585: Releasing IP address(es) ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.379 [INFO][3314] utils.go 188: Calico CNI releasing IP address ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.396 [INFO][3320] ipam_plugin.go 415: Releasing address using handleID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.396 [INFO][3320] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.396 [INFO][3320] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.403 [WARNING][3320] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.403 [INFO][3320] ipam_plugin.go 443: Releasing address using workloadID ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" HandleID="k8s-pod-network.39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Workload="10.200.8.17-k8s-nginx--deployment--8ffc5cf85--g89tw-eth0" Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.404 [INFO][3320] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:40:46.406536 env[1432]: 2024-02-08 23:40:46.405 [INFO][3314] k8s.go 591: Teardown processing complete. ContainerID="39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818" Feb 8 23:40:46.406536 env[1432]: time="2024-02-08T23:40:46.406513081Z" level=info msg="TearDown network for sandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" successfully" Feb 8 23:40:46.416000 env[1432]: time="2024-02-08T23:40:46.415954844Z" level=info msg="RemovePodSandbox \"39a4af1779eb6eb0318d0735877b6444176722d12971f5af9003ae89d50dd818\" returns successfully" Feb 8 23:40:46.604403 kubelet[2016]: I0208 23:40:46.604282 2016 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tvxct" podStartSLOduration=-9.223371988250523e+09 pod.CreationTimestamp="2024-02-08 23:39:58 +0000 UTC" firstStartedPulling="2024-02-08 23:40:39.766041686 +0000 UTC m=+54.026793871" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:46.604208299 +0000 UTC m=+60.864960484" watchObservedRunningTime="2024-02-08 23:40:46.6042527 +0000 UTC m=+60.865004885" Feb 8 23:40:47.151903 kubelet[2016]: E0208 23:40:47.151856 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:48.153033 kubelet[2016]: E0208 23:40:48.152976 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:49.153765 kubelet[2016]: E0208 23:40:49.153688 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:49.700469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178398500.mount: Deactivated successfully. 
Feb 8 23:40:50.154179 kubelet[2016]: E0208 23:40:50.154119 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:50.656545 env[1432]: time="2024-02-08T23:40:50.656489177Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:50.669967 env[1432]: time="2024-02-08T23:40:50.669917260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:50.678161 env[1432]: time="2024-02-08T23:40:50.678102611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:50.686114 env[1432]: time="2024-02-08T23:40:50.686086461Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:40:50.686715 env[1432]: time="2024-02-08T23:40:50.686687665Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:40:50.688758 env[1432]: time="2024-02-08T23:40:50.688713578Z" level=info msg="CreateContainer within sandbox \"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 8 23:40:50.776311 env[1432]: time="2024-02-08T23:40:50.776262124Z" level=info msg="CreateContainer within sandbox \"bcede9c73b6194e71edfc70af86c081a23f260dc4e138124e131450b2e80ea5c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"357e8fca054f5cefd5a66ab5f823534f89f8b03bfe937e8a95af86243aaf5fa8\"" Feb 8 23:40:50.776848 env[1432]: time="2024-02-08T23:40:50.776817027Z" level=info msg="StartContainer for \"357e8fca054f5cefd5a66ab5f823534f89f8b03bfe937e8a95af86243aaf5fa8\"" Feb 8 23:40:50.837840 env[1432]: time="2024-02-08T23:40:50.837802908Z" level=info msg="StartContainer for \"357e8fca054f5cefd5a66ab5f823534f89f8b03bfe937e8a95af86243aaf5fa8\" returns successfully" Feb 8 23:40:51.154987 kubelet[2016]: E0208 23:40:51.154928 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:51.758568 systemd[1]: run-containerd-runc-k8s.io-357e8fca054f5cefd5a66ab5f823534f89f8b03bfe937e8a95af86243aaf5fa8-runc.DYUKUo.mount: Deactivated successfully. Feb 8 23:40:52.155907 kubelet[2016]: E0208 23:40:52.155772 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:53.156513 kubelet[2016]: E0208 23:40:53.156454 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:54.156654 kubelet[2016]: E0208 23:40:54.156589 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:55.157648 kubelet[2016]: E0208 23:40:55.157588 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:56.157997 kubelet[2016]: E0208 23:40:56.157944 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:57.158344 kubelet[2016]: E0208 23:40:57.158270 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:57.176000 audit[3419]: NETFILTER_CFG table=filter:89 family=2 entries=18 op=nft_register_rule pid=3419 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.176000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd132ca560 a2=0 a3=7ffd132ca54c items=0 ppid=2229 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.198755 kubelet[2016]: I0208 23:40:57.196985 2016 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-g89tw" podStartSLOduration=-9.223372006657827e+09 pod.CreationTimestamp="2024-02-08 23:40:27 +0000 UTC" firstStartedPulling="2024-02-08 23:40:43.782945649 +0000 UTC m=+58.043697834" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:51.616003801 +0000 UTC m=+65.876756086" watchObservedRunningTime="2024-02-08 23:40:57.196947661 +0000 UTC m=+71.457699946" Feb 8 23:40:57.198755 kubelet[2016]: I0208 23:40:57.197124 2016 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:40:57.208629 kernel: audit: type=1325 audit(1707435657.176:272): table=filter:89 family=2 entries=18 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.208732 kernel: audit: type=1300 audit(1707435657.176:272): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd132ca560 a2=0 a3=7ffd132ca54c items=0 ppid=2229 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.229753 kernel: audit: type=1327 audit(1707435657.176:272): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.229841 kernel: audit: type=1325 audit(1707435657.177:273): table=nat:90 family=2 entries=94 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.177000 audit[3419]: NETFILTER_CFG table=nat:90 family=2 entries=94 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.177000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd132ca560 a2=0 a3=7ffd132ca54c items=0 ppid=2229 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.248982 kernel: audit: type=1300 audit(1707435657.177:273): arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd132ca560 a2=0 a3=7ffd132ca54c items=0 ppid=2229 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.249055 kernel: audit: type=1327 audit(1707435657.177:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.234000 audit[3447]: NETFILTER_CFG table=filter:91 family=2 entries=30 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.234000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffdced3f4b0 a2=0 a3=7ffdced3f49c items=0 ppid=2229 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.286168 kernel: audit: type=1325 audit(1707435657.234:274): table=filter:91 family=2 entries=30 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.286239 kernel: audit: type=1300 audit(1707435657.234:274): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffdced3f4b0 a2=0 a3=7ffdced3f49c items=0 ppid=2229 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.286265 kernel: audit: type=1327 audit(1707435657.234:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.234000 audit[3447]: NETFILTER_CFG table=nat:92 family=2 entries=94 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.304945 kernel: audit: type=1325 audit(1707435657.234:275): table=nat:92 family=2 entries=94 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:40:57.234000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffdced3f4b0 a2=0 a3=7ffdced3f49c items=0 ppid=2229 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:40:57.354504 kubelet[2016]: I0208 23:40:57.354454 2016 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxjh\" (UniqueName: \"kubernetes.io/projected/52eacb81-e711-4b8e-8b87-026d29e7a1c5-kube-api-access-vsxjh\") pod \"nfs-server-provisioner-0\" (UID: \"52eacb81-e711-4b8e-8b87-026d29e7a1c5\") " pod="default/nfs-server-provisioner-0" Feb 8 23:40:57.354684 kubelet[2016]: I0208 23:40:57.354517 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/52eacb81-e711-4b8e-8b87-026d29e7a1c5-data\") pod \"nfs-server-provisioner-0\" (UID: \"52eacb81-e711-4b8e-8b87-026d29e7a1c5\") " pod="default/nfs-server-provisioner-0" Feb 8 23:40:57.501340 env[1432]: time="2024-02-08T23:40:57.501289265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:52eacb81-e711-4b8e-8b87-026d29e7a1c5,Namespace:default,Attempt:0,}" Feb 8 23:40:57.642425 systemd-networkd[1578]: cali60e51b789ff: Link UP Feb 8 23:40:57.656579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:40:57.656675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 8 23:40:57.657414 systemd-networkd[1578]: cali60e51b789ff: Gained carrier Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.581 [INFO][3456] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.17-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 52eacb81-e711-4b8e-8b87-026d29e7a1c5 1328 0 2024-02-08 23:40:57 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.17 
nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.581 [INFO][3456] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.606 [INFO][3468] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" HandleID="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Workload="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.617 [INFO][3468] ipam_plugin.go 268: Auto assigning IP ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" HandleID="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Workload="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a1f50), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.17", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-08 23:40:57.606848456 +0000 UTC"}, Hostname:"10.200.8.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.617 [INFO][3468] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.617 [INFO][3468] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.617 [INFO][3468] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.17' Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.618 [INFO][3468] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.622 [INFO][3468] ipam.go 372: Looking up existing affinities for host host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.626 [INFO][3468] ipam.go 489: Trying affinity for 192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.627 [INFO][3468] ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.629 [INFO][3468] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.629 [INFO][3468] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.631 [INFO][3468] ipam.go 1682: Creating new handle: k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9 Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.634 [INFO][3468] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 
handle="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.638 [INFO][3468] ipam.go 1216: Successfully claimed IPs: [192.168.120.195/26] block=192.168.120.192/26 handle="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.638 [INFO][3468] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.195/26] handle="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" host="10.200.8.17" Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.638 [INFO][3468] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:40:57.665772 env[1432]: 2024-02-08 23:40:57.639 [INFO][3468] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.120.195/26] IPv6=[] ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" HandleID="k8s-pod-network.dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Workload="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.666549 env[1432]: 2024-02-08 23:40:57.640 [INFO][3456] k8s.go 385: Populated endpoint ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"52eacb81-e711-4b8e-8b87-026d29e7a1c5", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", 
"chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:57.666549 env[1432]: 2024-02-08 23:40:57.640 [INFO][3456] k8s.go 386: Calico CNI using IPs: [192.168.120.195/32] ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.666549 env[1432]: 2024-02-08 23:40:57.640 [INFO][3456] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.666549 env[1432]: 2024-02-08 23:40:57.657 [INFO][3456] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.667019 env[1432]: 2024-02-08 23:40:57.658 [INFO][3456] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"52eacb81-e711-4b8e-8b87-026d29e7a1c5", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"da:72:40:90:03:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:40:57.667019 env[1432]: 2024-02-08 23:40:57.664 [INFO][3456] k8s.go 491: Wrote updated endpoint to datastore ContainerID="dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.17-k8s-nfs--server--provisioner--0-eth0" Feb 8 23:40:57.681000 audit[3487]: NETFILTER_CFG table=filter:93 family=2 entries=38 op=nft_register_chain pid=3487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 
23:40:57.681000 audit[3487]: SYSCALL arch=c000003e syscall=46 success=yes exit=19500 a0=3 a1=7ffeefd7ca50 a2=0 a3=7ffeefd7ca3c items=0 ppid=2763 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:57.681000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:40:57.693303 env[1432]: time="2024-02-08T23:40:57.693237540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:57.693303 env[1432]: time="2024-02-08T23:40:57.693271740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:57.693467 env[1432]: time="2024-02-08T23:40:57.693285341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:57.693754 env[1432]: time="2024-02-08T23:40:57.693669043Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9 pid=3499 runtime=io.containerd.runc.v2 Feb 8 23:40:57.755649 env[1432]: time="2024-02-08T23:40:57.755561989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:52eacb81-e711-4b8e-8b87-026d29e7a1c5,Namespace:default,Attempt:0,} returns sandbox id \"dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9\"" Feb 8 23:40:57.757327 env[1432]: time="2024-02-08T23:40:57.757297599Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 8 23:40:58.159038 kubelet[2016]: E0208 23:40:58.158915 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:58.468197 systemd[1]: run-containerd-runc-k8s.io-dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9-runc.ZQuZc1.mount: Deactivated successfully. Feb 8 23:40:59.159430 kubelet[2016]: E0208 23:40:59.159376 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:59.542882 systemd-networkd[1578]: cali60e51b789ff: Gained IPv6LL Feb 8 23:41:00.160386 kubelet[2016]: E0208 23:41:00.160320 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:00.432613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623663963.mount: Deactivated successfully. 
Feb 8 23:41:01.161030 kubelet[2016]: E0208 23:41:01.160952 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:01.764677 systemd[1]: run-containerd-runc-k8s.io-7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0-runc.RXDYq7.mount: Deactivated successfully. Feb 8 23:41:02.162030 kubelet[2016]: E0208 23:41:02.161918 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:02.760785 env[1432]: time="2024-02-08T23:41:02.760722764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:02.796997 env[1432]: time="2024-02-08T23:41:02.796458151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:02.814810 env[1432]: time="2024-02-08T23:41:02.814760946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:02.821265 env[1432]: time="2024-02-08T23:41:02.821220080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:02.822038 env[1432]: time="2024-02-08T23:41:02.822000384Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 8 23:41:02.824383 env[1432]: time="2024-02-08T23:41:02.824350696Z" level=info 
msg="CreateContainer within sandbox \"dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 8 23:41:02.881040 env[1432]: time="2024-02-08T23:41:02.880989792Z" level=info msg="CreateContainer within sandbox \"dbbb910ed9e33d9b4d644b9f79af45fd1727d057b620998cb6af7ce274b01fd9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3f1a9b1bbb41af75b5a3eb02fa5ac770951958a7682af4a9a2bfcf51b65a0e21\"" Feb 8 23:41:02.881590 env[1432]: time="2024-02-08T23:41:02.881545595Z" level=info msg="StartContainer for \"3f1a9b1bbb41af75b5a3eb02fa5ac770951958a7682af4a9a2bfcf51b65a0e21\"" Feb 8 23:41:02.939927 env[1432]: time="2024-02-08T23:41:02.939874300Z" level=info msg="StartContainer for \"3f1a9b1bbb41af75b5a3eb02fa5ac770951958a7682af4a9a2bfcf51b65a0e21\" returns successfully" Feb 8 23:41:03.163128 kubelet[2016]: E0208 23:41:03.162967 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:03.644242 kubelet[2016]: I0208 23:41:03.644209 2016 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372030210611e+09 pod.CreationTimestamp="2024-02-08 23:40:57 +0000 UTC" firstStartedPulling="2024-02-08 23:40:57.756820096 +0000 UTC m=+72.017572281" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:41:03.643924135 +0000 UTC m=+77.904676320" watchObservedRunningTime="2024-02-08 23:41:03.644165337 +0000 UTC m=+77.904917522" Feb 8 23:41:03.693775 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 8 23:41:03.693908 kernel: audit: type=1325 audit(1707435663.687:277): table=filter:94 family=2 entries=18 op=nft_register_rule pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:41:03.687000 audit[3626]: NETFILTER_CFG table=filter:94 family=2 entries=18 
op=nft_register_rule pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:41:03.687000 audit[3626]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffea1ee0080 a2=0 a3=7ffea1ee006c items=0 ppid=2229 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:03.720845 kernel: audit: type=1300 audit(1707435663.687:277): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffea1ee0080 a2=0 a3=7ffea1ee006c items=0 ppid=2229 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:03.720964 kernel: audit: type=1327 audit(1707435663.687:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:41:03.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:41:03.692000 audit[3626]: NETFILTER_CFG table=nat:95 family=2 entries=178 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:41:03.739368 kernel: audit: type=1325 audit(1707435663.692:278): table=nat:95 family=2 entries=178 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:41:03.739464 kernel: audit: type=1300 audit(1707435663.692:278): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffea1ee0080 a2=0 a3=7ffea1ee006c items=0 ppid=2229 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:03.692000 audit[3626]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=72324 a0=3 a1=7ffea1ee0080 a2=0 a3=7ffea1ee006c items=0 ppid=2229 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:03.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:41:03.768764 kernel: audit: type=1327 audit(1707435663.692:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:41:04.163533 kubelet[2016]: E0208 23:41:04.163468 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:05.164139 kubelet[2016]: E0208 23:41:05.164067 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:06.116165 kubelet[2016]: E0208 23:41:06.116109 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:06.164775 kubelet[2016]: E0208 23:41:06.164678 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:07.165127 kubelet[2016]: E0208 23:41:07.165075 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:08.165646 kubelet[2016]: E0208 23:41:08.165592 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:09.166573 kubelet[2016]: E0208 23:41:09.166514 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:10.167243 kubelet[2016]: E0208 23:41:10.167187 2016 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:11.168152 kubelet[2016]: E0208 23:41:11.168093 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:12.168545 kubelet[2016]: E0208 23:41:12.168486 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:13.169157 kubelet[2016]: E0208 23:41:13.169101 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:14.170229 kubelet[2016]: E0208 23:41:14.170172 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:15.170949 kubelet[2016]: E0208 23:41:15.170890 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:16.171185 kubelet[2016]: E0208 23:41:16.171125 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:17.171791 kubelet[2016]: E0208 23:41:17.171718 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:18.172579 kubelet[2016]: E0208 23:41:18.172525 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:19.172835 kubelet[2016]: E0208 23:41:19.172775 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:20.173714 kubelet[2016]: E0208 23:41:20.173652 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:21.174284 kubelet[2016]: E0208 23:41:21.174219 2016 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:22.175169 kubelet[2016]: E0208 23:41:22.175107 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:23.176088 kubelet[2016]: E0208 23:41:23.176018 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:24.177059 kubelet[2016]: E0208 23:41:24.177007 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:25.177890 kubelet[2016]: E0208 23:41:25.177834 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:26.115669 kubelet[2016]: E0208 23:41:26.115608 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:26.178251 kubelet[2016]: E0208 23:41:26.178204 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:27.179050 kubelet[2016]: E0208 23:41:27.178995 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:28.172073 kubelet[2016]: I0208 23:41:28.172023 2016 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:41:28.179763 kubelet[2016]: E0208 23:41:28.179704 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:28.346538 kubelet[2016]: I0208 23:41:28.346478 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmm7\" (UniqueName: \"kubernetes.io/projected/fd4e7cc3-053c-4c5e-8aae-e516cad6ced0-kube-api-access-8gmm7\") pod \"test-pod-1\" (UID: \"fd4e7cc3-053c-4c5e-8aae-e516cad6ced0\") " pod="default/test-pod-1" 
Feb 8 23:41:28.346812 kubelet[2016]: I0208 23:41:28.346557 2016 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dbf17a9e-3dba-4e49-a817-5913400a8a94\" (UniqueName: \"kubernetes.io/nfs/fd4e7cc3-053c-4c5e-8aae-e516cad6ced0-pvc-dbf17a9e-3dba-4e49-a817-5913400a8a94\") pod \"test-pod-1\" (UID: \"fd4e7cc3-053c-4c5e-8aae-e516cad6ced0\") " pod="default/test-pod-1" Feb 8 23:41:28.530515 kernel: Failed to create system directory netfs Feb 8 23:41:28.530670 kernel: audit: type=1400 audit(1707435688.507:279): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.530703 kernel: Failed to create system directory netfs Feb 8 23:41:28.507000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.549609 kernel: audit: type=1400 audit(1707435688.507:279): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.549759 kernel: Failed to create system directory netfs Feb 8 23:41:28.507000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.572962 kernel: audit: type=1400 audit(1707435688.507:279): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.573113 kernel: Failed to create system directory netfs 
Feb 8 23:41:28.573143 kernel: audit: type=1400 audit(1707435688.507:279): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.507000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.507000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.507000 audit[3662]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556f6ffe55e0 a1=153bc a2=556f6f7fc2b0 a3=5 items=0 ppid=50 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:28.608911 kernel: audit: type=1300 audit(1707435688.507:279): arch=c000003e syscall=175 success=yes exit=0 a0=556f6ffe55e0 a1=153bc a2=556f6f7fc2b0 a3=5 items=0 ppid=50 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:28.507000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 8 23:41:28.643415 kernel: audit: type=1327 audit(1707435688.507:279): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 8 23:41:28.643570 kernel: Failed to create system directory fscache Feb 8 23:41:28.643595 kernel: audit: type=1400 audit(1707435688.612:280): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.643621 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.662211 kernel: audit: type=1400 audit(1707435688.612:280): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.662346 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.681653 kernel: audit: type=1400 audit(1707435688.612:280): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.681802 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.701241 kernel: audit: type=1400 audit(1707435688.612:280): avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.701372 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.709198 kernel: Failed to create system directory fscache Feb 8 23:41:28.709302 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.715871 kernel: Failed to create system directory fscache Feb 8 23:41:28.715949 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.722716 kernel: Failed to create system directory fscache Feb 8 23:41:28.722808 kernel: Failed to 
create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.729648 kernel: Failed to create system directory fscache Feb 8 23:41:28.729719 kernel: Failed to create system directory fscache Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.612000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.736090 kernel: Failed to create system directory fscache Feb 8 23:41:28.739862 kernel: FS-Cache: Loaded Feb 8 23:41:28.612000 audit[3662]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556f701fa9c0 a1=4c0fc a2=556f6f7fc2b0 a3=5 items=0 ppid=50 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:28.612000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { 
confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.811633 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.811724 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.811766 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.818217 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.818287 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.824449 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.824523 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.830529 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.830605 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.836706 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.836788 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.842569 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.842636 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.845477 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.848276 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.853990 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.854058 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.857770 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.862491 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.862563 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 
audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.868757 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.868842 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.876212 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.876289 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.880757 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.883760 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.890280 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.890343 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.897135 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.897197 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.903984 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.905422 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.907780 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.918733 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.919547 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.919970 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.925612 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.925678 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.932603 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.933557 kernel: Failed to create system directory 
sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.939645 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.939722 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.945880 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.945948 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.951672 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.951762 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.958232 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.958284 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.965292 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.965909 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.970638 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.975702 kernel: Failed to create system directory sunrpc Feb 8 
23:41:28.975762 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.982665 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.982733 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.989211 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.989270 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.995845 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.995927 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality 
} for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.002448 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.002919 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.006297 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.013200 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.013300 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.019442 kernel: 
Failed to create system directory sunrpc Feb 8 23:41:29.019515 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.025371 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.025470 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.030379 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.030458 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.035346 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.035410 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 
audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.037686 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.042675 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.042759 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.048315 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.048388 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 8 23:41:29.053733 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.053809 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.060106 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.060188 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.065677 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.065760 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.071155 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: 
denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.079024 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.079095 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.079119 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.084405 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.084487 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.089654 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.089736 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.092368 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.094911 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.097645 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.100183 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.102697 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.105483 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 8 23:41:29.108135 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.110854 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.113473 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.116099 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.118895 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.121539 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.124077 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: 
denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.127404 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.132342 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.132402 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.135792 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.141486 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.141532 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.148260 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.148359 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.151761 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.156347 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.156410 kernel: Failed to create system directory sunrpc Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.159803 kernel: Failed to create system directory sunrpc Feb 8 23:41:29.161910 kernel: Failed to create system directory 
sunrpc
Feb 8 23:41:28.791000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 8 23:41:29.167283 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.167342 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.173329 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.173406 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.181265 kubelet[2016]: E0208 23:41:29.181200 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:29.187340 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.187370 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.187385 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.194007 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.194910 kernel: Failed to create system directory sunrpc
Feb 8 23:41:29.209084 kernel: RPC: Registered named UNIX socket transport module.
Feb 8 23:41:29.209177 kernel: RPC: Registered udp transport module.
Feb 8 23:41:29.209198 kernel: RPC: Registered tcp transport module.
Feb 8 23:41:29.212455 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 8 23:41:28.791000 audit[3662]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556f70246ad0 a1=1588c4 a2=556f6f7fc2b0 a3=5 items=6 ppid=50 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:28.791000 audit: CWD cwd="/"
Feb 8 23:41:28.791000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PATH item=1 name=(null) inode=27299 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PATH item=2 name=(null) inode=27299 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PATH item=3 name=(null) inode=27300 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PATH item=4 name=(null) inode=27299 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PATH item=5 name=(null) inode=27301 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:41:28.791000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 8 23:41:29.319000 audit[3662]: AVC avc: denied { confidentiality } for pid=3662 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 8 23:41:29.333780 kernel: Failed to create system directory nfs
Feb 8 23:41:29.333881 kernel: Failed to create system directory nfs
Feb 8 23:41:29.333908 kernel: Failed to create system directory nfs
Feb 8 23:41:29.338703 kernel: Failed to create system directory nfs
Feb 8 23:41:29.338787 kernel: Failed to create system directory nfs
Feb 8 23:41:29.343992 kernel: Failed to create system directory nfs
Feb 8 23:41:29.344064 kernel: Failed to create system directory nfs
Feb 8 23:41:29.349372 kernel: Failed to create system directory nfs
Feb 8 23:41:29.349429 kernel: Failed to create system directory nfs
Feb 8 23:41:29.354485 kernel: Failed to create system directory nfs
Feb 8 23:41:29.354540 kernel: Failed to create system directory nfs
Feb 8 23:41:29.360040 kernel: Failed to create system directory nfs
Feb 8 23:41:29.360099 kernel: Failed to create system directory nfs
Feb 8 23:41:29.362644 kernel: Failed to create system directory nfs
Feb 8 23:41:29.365181 kernel: Failed to create system directory nfs
Feb 8 23:41:29.367560 kernel: Failed to create system directory nfs
Feb 8 23:41:29.370065 kernel: Failed to create system directory nfs
Feb 8 23:41:29.372662 kernel: Failed to create system directory nfs
Feb 8 23:41:29.375378 kernel: Failed to create system directory nfs
Feb 8 23:41:29.377823 kernel: Failed to create system directory nfs
Feb 8 23:41:29.380830 kernel: Failed to create system directory nfs
Feb 8 23:41:29.382753 kernel: Failed to create system directory nfs
Feb 8 23:41:29.387378 kernel: Failed to create system directory nfs
Feb 8 23:41:29.387439 kernel: Failed to create system directory nfs
Feb 8 23:41:29.392426 kernel: Failed to create system directory nfs
Feb 8 23:41:29.392490 kernel: Failed to create system directory nfs
Feb 8 23:41:29.394802 kernel: Failed to create system directory nfs
Feb 8 23:41:29.399903 kernel: Failed to create system directory nfs
Feb 8 23:41:29.399975 kernel: Failed to create system directory nfs
Feb 8 23:41:29.405512 kernel: Failed to create system directory nfs
Feb 8 23:41:29.405575 kernel: Failed to create system directory nfs
Feb 8 23:41:29.410627 kernel: Failed to create system directory nfs
Feb 8 23:41:29.410695 kernel: Failed to create system directory nfs
Feb 8 23:41:29.415791 kernel: Failed to create system directory nfs
Feb 8 23:41:29.415856 kernel: Failed to create system directory nfs
Feb 8 23:41:29.420623 kernel: Failed to create system directory nfs
Feb 8 23:41:29.420664 kernel: Failed to create system directory nfs
Feb 8 23:41:29.425476 kernel: Failed to create system directory nfs
Feb 8 23:41:29.425527 kernel: Failed to create system directory nfs
Feb 8 23:41:29.430134 kernel: Failed to create system directory nfs
Feb 8 23:41:29.432509 kernel: Failed to create system directory nfs
Feb 8 23:41:29.432558 kernel: Failed to create system directory nfs
Feb 8 23:41:29.436946 kernel: Failed to create system directory nfs
Feb 8 23:41:29.437005 kernel: Failed to create system directory nfs
Feb 8 23:41:29.441040 kernel: Failed to create system directory nfs
Feb 8 23:41:29.441095 kernel: Failed to create system directory nfs
Feb 8 23:41:29.446252 kernel: Failed to create system directory nfs
Feb 8 23:41:29.446302 kernel: Failed to create system directory nfs
Feb 8 23:41:29.448807 kernel: Failed to create system directory nfs
Feb 8 23:41:29.453519 kernel: Failed to create system directory nfs
Feb 8 23:41:29.453571 kernel: Failed to create system directory nfs
Feb 8 23:41:29.458408 kernel: Failed to create system directory nfs
Feb 8 23:41:29.458460 kernel: Failed to create system directory nfs
Feb 8 23:41:29.462979 kernel: Failed to create system directory nfs
Feb 8 23:41:29.463031 kernel: Failed to create system directory nfs
Feb 8 23:41:29.467139 kernel: Failed to create system directory nfs
Feb 8 23:41:29.467209 kernel: Failed to create system directory nfs
Feb 8 23:41:29.471471 kernel: Failed to create system directory nfs
Feb 8 23:41:29.471550 kernel: Failed to create system directory nfs
Feb 8 23:41:29.488121 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 8 23:41:29.319000 audit[3662]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556f703e9680 a1=e29dc a2=556f6f7fc2b0 a3=5 items=0 ppid=50 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:29.319000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 8 23:41:29.556564 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.556702 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.556750 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.562595 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.562666 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.567623 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.567675 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.572574 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.572654 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.577484 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.577543 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.582552 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.582621 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.587502 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.587568 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.592361 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.592422 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.598316 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.598423 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.603414 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.603489 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.608627 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.608695 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.613563 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.613634 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.618475 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.618541 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.623666 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.623728 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.628692 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.628772 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.633616 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.633666 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.638781 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.638846 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.643873 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.643942 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.648575 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.648638 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.653414 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.653464 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.659174 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.659241 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.663906 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.663972 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.668665 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.668731 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.673301 kernel: Failed to create system directory nfs4
Feb 8 23:41:29.673369 kernel: Failed to create system directory nfs4
Feb 8
23:41:29.678252 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.678308 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.682849 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.685544 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.685596 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.688985 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.693003 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.693054 kernel: Failed to create system directory nfs4 Feb 8 
23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.697913 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.697981 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.702532 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.702580 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.707454 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.707507 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.713365 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.713438 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.718143 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.718240 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.722945 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.722999 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC 
avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.728607 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.728660 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.733490 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.733542 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.738685 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.738757 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.743719 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.743790 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.748487 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.748540 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.754860 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.754931 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.760406 kernel: Failed to create system directory nfs4 Feb 8 
23:41:29.760475 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.765307 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.765996 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.768788 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.773055 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.773129 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.775450 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for 
pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.777875 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.782496 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.782546 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.787192 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.787244 kernel: Failed to create system directory nfs4 Feb 8 23:41:29.536000 audit[3668]: AVC avc: denied { confidentiality } for pid=3668 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.932929 kernel: NFS: Registering the id_resolver key type Feb 8 23:41:29.933080 kernel: Key type id_resolver registered Feb 8 23:41:29.933107 kernel: Key type id_legacy registered Feb 8 23:41:29.536000 audit[3668]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 
a0=7f21a84f8010 a1=1d3cc4 a2=55a7dacc92b0 a3=5 items=0 ppid=50 pid=3668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:29.536000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.979959 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.980038 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.980072 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.985188 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.985277 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.990445 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.990516 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.995885 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.995952 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.001121 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.001185 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 8 23:41:30.006006 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.006066 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.011429 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.011494 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.016349 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.016415 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.018874 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 
audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.024085 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.024149 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.029313 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.029365 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.034413 kernel: Failed to create system directory rpcgss Feb 8 23:41:30.035171 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.037759 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: AVC avc: denied { confidentiality } for pid=3670 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 8 23:41:30.042440 kernel: Failed to create system directory rpcgss Feb 8 23:41:29.970000 audit[3670]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f49c2223010 a1=4f524 a2=55ff9b4c52b0 a3=5 items=0 ppid=50 pid=3670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:29.970000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 8 23:41:30.181584 kubelet[2016]: E0208 23:41:30.181536 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:30.229702 nfsidmap[3675]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-df5d74ad8f' Feb 8 23:41:30.234926 nfsidmap[3676]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-df5d74ad8f' Feb 8 23:41:30.245000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.245000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.245000 audit[1]: AVC avc: denied { watch_reads } 
for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.245000 audit[1537]: AVC avc: denied { watch_reads } for pid=1537 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.245000 audit[1537]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55b78aa38aa0 a2=10 a3=2adfe819d6aa468a items=0 ppid=1 pid=1537 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.245000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 8 23:41:30.246000 audit[1537]: AVC avc: denied { watch_reads } for pid=1537 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.246000 audit[1537]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55b78aa38aa0 a2=10 a3=2adfe819d6aa468a items=0 ppid=1 pid=1537 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.246000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 8 23:41:30.246000 audit[1537]: AVC avc: denied { watch_reads } for pid=1537 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 8 23:41:30.246000 audit[1537]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d 
a1=55b78aa38aa0 a2=10 a3=2adfe819d6aa468a items=0 ppid=1 pid=1537 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.246000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 8 23:41:30.276237 env[1432]: time="2024-02-08T23:41:30.276172432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd4e7cc3-053c-4c5e-8aae-e516cad6ced0,Namespace:default,Attempt:0,}" Feb 8 23:41:30.448226 systemd-networkd[1578]: cali5ec59c6bf6e: Link UP Feb 8 23:41:30.455566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:41:30.455671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 8 23:41:30.455662 systemd-networkd[1578]: cali5ec59c6bf6e: Gained carrier Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.379 [INFO][3677] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.17-k8s-test--pod--1-eth0 default fd4e7cc3-053c-4c5e-8aae-e516cad6ced0 1427 0 2024-02-08 23:40:58 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.17 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.379 [INFO][3677] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.408 [INFO][3689] ipam_plugin.go 228: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" HandleID="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Workload="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.418 [INFO][3689] ipam_plugin.go 268: Auto assigning IP ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" HandleID="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Workload="10.200.8.17-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bb620), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.17", "pod":"test-pod-1", "timestamp":"2024-02-08 23:41:30.408355553 +0000 UTC"}, Hostname:"10.200.8.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.418 [INFO][3689] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.418 [INFO][3689] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.418 [INFO][3689] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.17' Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.419 [INFO][3689] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.423 [INFO][3689] ipam.go 372: Looking up existing affinities for host host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.426 [INFO][3689] ipam.go 489: Trying affinity for 192.168.120.192/26 host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.428 [INFO][3689] ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.430 [INFO][3689] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.430 [INFO][3689] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.431 [INFO][3689] ipam.go 1682: Creating new handle: k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0 Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.438 [INFO][3689] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.443 [INFO][3689] ipam.go 1216: Successfully claimed IPs: [192.168.120.196/26] block=192.168.120.192/26 handle="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 
23:41:30.443 [INFO][3689] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.196/26] handle="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" host="10.200.8.17" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.443 [INFO][3689] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.443 [INFO][3689] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.120.196/26] IPv6=[] ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" HandleID="k8s-pod-network.2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Workload="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.444 [INFO][3677] k8s.go 385: Populated endpoint ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fd4e7cc3-053c-4c5e-8aae-e516cad6ced0", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:41:30.463308 env[1432]: 2024-02-08 23:41:30.444 [INFO][3677] k8s.go 386: Calico CNI using IPs: [192.168.120.196/32] ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.464432 env[1432]: 2024-02-08 23:41:30.444 [INFO][3677] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.464432 env[1432]: 2024-02-08 23:41:30.457 [INFO][3677] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.464432 env[1432]: 2024-02-08 23:41:30.457 [INFO][3677] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.17-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fd4e7cc3-053c-4c5e-8aae-e516cad6ced0", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 40, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.17", ContainerID:"2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"fa:ff:9f:11:fe:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:41:30.464432 env[1432]: 2024-02-08 23:41:30.461 [INFO][3677] k8s.go 491: Wrote updated endpoint to datastore ContainerID="2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.17-k8s-test--pod--1-eth0" Feb 8 23:41:30.477000 audit[3710]: NETFILTER_CFG table=filter:96 family=2 entries=38 op=nft_register_chain pid=3710 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:41:30.477000 audit[3710]: SYSCALL arch=c000003e syscall=46 success=yes exit=19080 a0=3 a1=7ffdf26dcf90 a2=0 a3=7ffdf26dcf7c items=0 ppid=2763 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:41:30.485724 env[1432]: time="2024-02-08T23:41:30.485593357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:41:30.485724 env[1432]: time="2024-02-08T23:41:30.485628757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:41:30.485724 env[1432]: time="2024-02-08T23:41:30.485646758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:41:30.486154 env[1432]: time="2024-02-08T23:41:30.486090459Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0 pid=3717 runtime=io.containerd.runc.v2 Feb 8 23:41:30.557631 env[1432]: time="2024-02-08T23:41:30.557577741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd4e7cc3-053c-4c5e-8aae-e516cad6ced0,Namespace:default,Attempt:0,} returns sandbox id \"2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0\"" Feb 8 23:41:30.559159 env[1432]: time="2024-02-08T23:41:30.559132947Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:41:31.125958 env[1432]: time="2024-02-08T23:41:31.125906578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:31.132073 env[1432]: time="2024-02-08T23:41:31.132022201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:31.137900 env[1432]: time="2024-02-08T23:41:31.137864124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:31.142462 env[1432]: time="2024-02-08T23:41:31.142410642Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:31.143087 env[1432]: time="2024-02-08T23:41:31.143048445Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:41:31.145462 env[1432]: time="2024-02-08T23:41:31.145426754Z" level=info msg="CreateContainer within sandbox \"2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 8 23:41:31.177127 env[1432]: time="2024-02-08T23:41:31.177074778Z" level=info msg="CreateContainer within sandbox \"2091142799dc03b2c05fd7ecd14a37b28a3fc9e991d35543dd67e2f665a070d0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a82a37f8749a2a39dc1fff81447012fb00d377aa0c8684f83ddfba7a0766197b\"" Feb 8 23:41:31.177688 env[1432]: time="2024-02-08T23:41:31.177655280Z" level=info msg="StartContainer for \"a82a37f8749a2a39dc1fff81447012fb00d377aa0c8684f83ddfba7a0766197b\"" Feb 8 23:41:31.181860 kubelet[2016]: E0208 23:41:31.181798 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:31.227684 env[1432]: time="2024-02-08T23:41:31.227634776Z" level=info msg="StartContainer for \"a82a37f8749a2a39dc1fff81447012fb00d377aa0c8684f83ddfba7a0766197b\" returns successfully" Feb 8 23:41:31.736982 systemd-networkd[1578]: cali5ec59c6bf6e: Gained IPv6LL Feb 8 23:41:31.757558 systemd[1]: run-containerd-runc-k8s.io-7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0-runc.yIBxhd.mount: Deactivated successfully. 
Feb 8 23:41:32.182709 kubelet[2016]: E0208 23:41:32.182559 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:33.183609 kubelet[2016]: E0208 23:41:33.183546 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:34.184631 kubelet[2016]: E0208 23:41:34.184566 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:35.185250 kubelet[2016]: E0208 23:41:35.185186 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:36.186097 kubelet[2016]: E0208 23:41:36.185971 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:37.186249 kubelet[2016]: E0208 23:41:37.186184 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:38.187279 kubelet[2016]: E0208 23:41:38.187218 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:39.188049 kubelet[2016]: E0208 23:41:39.187994 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:40.189073 kubelet[2016]: E0208 23:41:40.189010 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:41.189759 kubelet[2016]: E0208 23:41:41.189681 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:42.190042 kubelet[2016]: E0208 23:41:42.189986 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 
23:41:43.191022 kubelet[2016]: E0208 23:41:43.190965 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:44.191626 kubelet[2016]: E0208 23:41:44.191566 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:45.192760 kubelet[2016]: E0208 23:41:45.192683 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:46.115538 kubelet[2016]: E0208 23:41:46.115485 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:46.194256 kubelet[2016]: E0208 23:41:46.194210 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:47.194985 kubelet[2016]: E0208 23:41:47.194922 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:48.195150 kubelet[2016]: E0208 23:41:48.195091 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:49.195497 kubelet[2016]: E0208 23:41:49.195434 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:50.196357 kubelet[2016]: E0208 23:41:50.196317 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:51.196867 kubelet[2016]: E0208 23:41:51.196807 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:51.550648 kubelet[2016]: E0208 23:41:51.550522 2016 controller.go:189] failed to update lease, error: Put 
"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.17?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 8 23:41:52.197475 kubelet[2016]: E0208 23:41:52.197415 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:53.198199 kubelet[2016]: E0208 23:41:53.198141 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:54.198564 kubelet[2016]: E0208 23:41:54.198507 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:55.198713 kubelet[2016]: E0208 23:41:55.198664 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:56.199092 kubelet[2016]: E0208 23:41:56.199039 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:57.199421 kubelet[2016]: E0208 23:41:57.199379 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:57.759278 kubelet[2016]: E0208 23:41:57.759233 2016 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:37480->10.200.8.23:2379: read: connection timed out Feb 8 23:41:58.200276 kubelet[2016]: E0208 23:41:58.200058 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:59.200239 kubelet[2016]: E0208 23:41:59.200181 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:00.200399 kubelet[2016]: E0208 23:42:00.200349 2016 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:01.201314 kubelet[2016]: E0208 23:42:01.201261 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:01.742245 systemd[1]: run-containerd-runc-k8s.io-7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0-runc.Lgtgdu.mount: Deactivated successfully. Feb 8 23:42:02.201833 kubelet[2016]: E0208 23:42:02.201679 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:03.202065 kubelet[2016]: E0208 23:42:03.202014 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:04.202491 kubelet[2016]: E0208 23:42:04.202435 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:05.203401 kubelet[2016]: E0208 23:42:05.203341 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:06.115436 kubelet[2016]: E0208 23:42:06.115379 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:06.204238 kubelet[2016]: E0208 23:42:06.204174 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:07.205134 kubelet[2016]: E0208 23:42:07.205074 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:07.760056 kubelet[2016]: E0208 23:42:07.759999 2016 controller.go:189] failed to update lease, error: Put "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.17?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 8 23:42:08.205522 
kubelet[2016]: E0208 23:42:08.205375 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:09.206349 kubelet[2016]: E0208 23:42:09.206292 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:10.207222 kubelet[2016]: E0208 23:42:10.207165 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:11.207896 kubelet[2016]: E0208 23:42:11.207837 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:12.208705 kubelet[2016]: E0208 23:42:12.208665 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:13.209636 kubelet[2016]: E0208 23:42:13.209594 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:14.210046 kubelet[2016]: E0208 23:42:14.210005 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:15.210401 kubelet[2016]: E0208 23:42:15.210345 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:16.210869 kubelet[2016]: E0208 23:42:16.210814 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:17.211511 kubelet[2016]: E0208 23:42:17.211461 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:17.760872 kubelet[2016]: E0208 23:42:17.760820 2016 controller.go:189] failed to update lease, error: Put 
"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.17?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 8 23:42:18.212213 kubelet[2016]: E0208 23:42:18.212174 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:19.212524 kubelet[2016]: E0208 23:42:19.212441 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:20.212907 kubelet[2016]: E0208 23:42:20.212849 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:21.213050 kubelet[2016]: E0208 23:42:21.212999 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:22.213815 kubelet[2016]: E0208 23:42:22.213759 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:23.213947 kubelet[2016]: E0208 23:42:23.213880 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:24.214066 kubelet[2016]: E0208 23:42:24.214005 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:25.214475 kubelet[2016]: E0208 23:42:25.214418 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:26.115564 kubelet[2016]: E0208 23:42:26.115513 2016 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:26.215283 kubelet[2016]: E0208 23:42:26.215223 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 
23:42:27.215455 kubelet[2016]: E0208 23:42:27.215393 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:27.761292 kubelet[2016]: E0208 23:42:27.761232 2016 controller.go:189] failed to update lease, error: Put "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.17?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 8 23:42:27.761292 kubelet[2016]: I0208 23:42:27.761293 2016 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 8 23:42:28.215706 kubelet[2016]: E0208 23:42:28.215664 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:29.216690 kubelet[2016]: E0208 23:42:29.216632 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:30.216925 kubelet[2016]: E0208 23:42:30.216860 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:31.217446 kubelet[2016]: E0208 23:42:31.217394 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:31.752329 systemd[1]: run-containerd-runc-k8s.io-7c26866f4130071c4a8b082e1623a961b4a79ccdd9aa73b4537470c79c7620b0-runc.wvolpm.mount: Deactivated successfully. 
Feb 8 23:42:32.217995 kubelet[2016]: E0208 23:42:32.217945 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:32.560262 kubelet[2016]: E0208 23:42:32.560068 2016 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.17\": Get \"https://10.200.8.4:6443/api/v1/nodes/10.200.8.17?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 8 23:42:33.218143 kubelet[2016]: E0208 23:42:33.218083 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:34.219255 kubelet[2016]: E0208 23:42:34.219200 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:35.220561 kubelet[2016]: E0208 23:42:35.220521 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:35.317775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.335405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.349341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.365720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.379784 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.394359 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.420584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:42:35.420923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.421041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.421148 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.440759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.441071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.454982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.486588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.486733 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.486870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.500806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.501164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.511647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.511981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.522374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.522719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.534960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.535252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.549794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.550167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.564508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.587510 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.587698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.589003 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.592176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.611391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.611776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.628818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.649974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.660986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:42:35.661185 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:42:35.661299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[... identical hv_storvsc message repeated verbatim, timestamps Feb 8 23:42:35.661299 through Feb 8 23:42:36.220 ...]
Feb 8 23:42:36.242254 kubelet[2016]: E0208 23:42:36.220752 2016 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
[... identical hv_storvsc message repeated verbatim, timestamps through Feb 8 23:42:36.686954 ...]
Feb 8 23:42:36.686954 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001