Feb 9 19:04:48.034914 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:04:48.034945 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:48.034957 kernel: BIOS-provided physical RAM map:
Feb 9 19:04:48.034968 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:04:48.034976 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:04:48.034985 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:04:48.034998 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:04:48.035008 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:04:48.035017 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:04:48.035027 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:04:48.035038 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:04:48.035048 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:04:48.035057 kernel: NX (Execute Disable) protection: active
Feb 9 19:04:48.035072 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:04:48.035097 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:04:48.035107 kernel: random: crng init done
Feb 9 19:04:48.035119 kernel: SMBIOS 3.1.0 present.
Feb 9 19:04:48.035130 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:04:48.035144 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:04:48.035154 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:04:48.035164 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:04:48.035173 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:04:48.035186 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:04:48.035196 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:04:48.035207 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:04:48.035219 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:04:48.035230 kernel: tsc: Detected 2593.903 MHz processor
Feb 9 19:04:48.035242 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:04:48.035254 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:04:48.035265 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:04:48.035276 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:04:48.035289 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:04:48.035303 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:04:48.035313 kernel: Using GB pages for direct mapping
Feb 9 19:04:48.035325 kernel: Secure boot disabled
Feb 9 19:04:48.035337 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:04:48.035348 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:04:48.035358 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035371 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035383 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:04:48.035401 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:04:48.035414 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035425 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035436 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035448 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035492 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035507 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035522 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:48.035533 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:04:48.035545 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:04:48.035556 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:04:48.035567 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:04:48.035583 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:04:48.035601 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:04:48.035615 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:04:48.035626 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:04:48.035637 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:04:48.035649 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:04:48.035660 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:04:48.035671 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:04:48.035683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:04:48.035695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:04:48.035707 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:04:48.035723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:04:48.035736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:04:48.035747 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:04:48.035758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:04:48.035768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:04:48.035780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:04:48.035791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:04:48.035804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:04:48.035817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:04:48.035830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:04:48.035841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:04:48.035852 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:04:48.035863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:04:48.035877 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:04:48.035887 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff]
Feb 9 19:04:48.035899 kernel: Zone ranges:
Feb 9 19:04:48.035911 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:04:48.035922 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:04:48.035936 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:04:48.035948 kernel: Movable zone start for each node
Feb 9 19:04:48.035960 kernel: Early memory node ranges
Feb 9 19:04:48.035972 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:04:48.035983 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:04:48.035996 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:04:48.036009 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:04:48.036022 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:04:48.036036 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:04:48.036052 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:04:48.036066 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:04:48.036079 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:04:48.036091 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:04:48.036104 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:04:48.036116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:04:48.036128 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:04:48.036142 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:04:48.036154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:04:48.036170 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:04:48.036182 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:04:48.036195 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:04:48.036208 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:04:48.036221 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:04:48.036233 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:04:48.036246 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:04:48.036259 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:04:48.036272 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:04:48.036288 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:04:48.036301 kernel: Policy zone: Normal
Feb 9 19:04:48.036315 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:48.036329 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:04:48.036342 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:04:48.036355 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:04:48.036368 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:04:48.036381 kernel: Memory: 8081196K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306004K reserved, 0K cma-reserved)
Feb 9 19:04:48.036398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:04:48.036411 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:04:48.036434 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:04:48.036476 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:04:48.036490 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:04:48.036503 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:04:48.036516 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:04:48.036530 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:04:48.036544 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:04:48.036558 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:04:48.036571 kernel: Using NULL legacy PIC
Feb 9 19:04:48.036589 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:04:48.036602 kernel: Console: colour dummy device 80x25
Feb 9 19:04:48.036616 kernel: printk: console [tty1] enabled
Feb 9 19:04:48.036629 kernel: printk: console [ttyS0] enabled
Feb 9 19:04:48.036642 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:04:48.036659 kernel: ACPI: Core revision 20210730
Feb 9 19:04:48.036671 kernel: Failed to register legacy timer interrupt
Feb 9 19:04:48.036685 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:04:48.036698 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:04:48.036710 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593903)
Feb 9 19:04:48.036725 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:04:48.036738 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:04:48.036751 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:04:48.036765 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:04:48.036777 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:04:48.036794 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:04:48.036807 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:04:48.036820 kernel: RETBleed: Vulnerable
Feb 9 19:04:48.036833 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:04:48.036846 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:04:48.036860 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:04:48.036873 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:04:48.036885 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:04:48.036899 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:04:48.036913 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:04:48.036930 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:04:48.036943 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:04:48.036956 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:04:48.036970 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:04:48.036983 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:04:48.036995 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:04:48.037009 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:04:48.037022 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:04:48.037035 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:04:48.037048 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:04:48.037060 kernel: LSM: Security Framework initializing
Feb 9 19:04:48.037073 kernel: SELinux: Initializing.
Feb 9 19:04:48.037090 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:04:48.037104 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:04:48.037118 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:04:48.037131 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:04:48.037144 kernel: signal: max sigframe size: 3632
Feb 9 19:04:48.037157 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:04:48.037171 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:04:48.037184 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:04:48.037197 kernel: x86: Booting SMP configuration:
Feb 9 19:04:48.037209 kernel: .... node #0, CPUs: #1
Feb 9 19:04:48.037224 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:04:48.037236 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:04:48.037249 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:04:48.037260 kernel: smpboot: Max logical packages: 1
Feb 9 19:04:48.037272 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Feb 9 19:04:48.037285 kernel: devtmpfs: initialized
Feb 9 19:04:48.037297 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:04:48.037309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:04:48.037325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:04:48.037338 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:04:48.037351 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:04:48.037364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:04:48.037376 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:04:48.037390 kernel: audit: type=2000 audit(1707505486.023:1): state=initialized audit_enabled=0 res=1
Feb 9 19:04:48.037402 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:04:48.037415 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:04:48.037429 kernel: cpuidle: using governor menu
Feb 9 19:04:48.037447 kernel: ACPI: bus type PCI registered
Feb 9 19:04:48.037473 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:04:48.037486 kernel: dca service started, version 1.12.1
Feb 9 19:04:48.037501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:04:48.037515 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:04:48.037530 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:04:48.037544 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:04:48.037558 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:04:48.037573 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:04:48.037591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:04:48.037605 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:04:48.037619 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:04:48.037633 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:04:48.037647 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:04:48.037660 kernel: ACPI: Interpreter enabled
Feb 9 19:04:48.037674 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:04:48.037687 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:04:48.037700 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:04:48.037714 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:04:48.037726 kernel: iommu: Default domain type: Translated
Feb 9 19:04:48.037738 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:04:48.037750 kernel: vgaarb: loaded
Feb 9 19:04:48.037762 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:04:48.037773 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:04:48.037785 kernel: PTP clock support registered
Feb 9 19:04:48.037798 kernel: Registered efivars operations
Feb 9 19:04:48.037811 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:04:48.037824 kernel: PCI: System does not support PCI
Feb 9 19:04:48.037840 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:04:48.037853 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:04:48.037865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:04:48.037877 kernel: pnp: PnP ACPI init
Feb 9 19:04:48.037889 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:04:48.037900 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:04:48.037913 kernel: NET: Registered PF_INET protocol family
Feb 9 19:04:48.037925 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:04:48.037939 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:04:48.037952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:04:48.037964 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:04:48.037976 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:04:48.037988 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:04:48.038000 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:04:48.038012 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:04:48.038024 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:04:48.038037 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:04:48.038051 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:04:48.038063 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:04:48.038075 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:04:48.038088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:04:48.038100 kernel: Initialise system trusted keyrings
Feb 9 19:04:48.038113 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:04:48.038126 kernel: Key type asymmetric registered
Feb 9 19:04:48.038138 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:04:48.038149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:04:48.038164 kernel: io scheduler mq-deadline registered
Feb 9 19:04:48.038176 kernel: io scheduler kyber registered
Feb 9 19:04:48.038188 kernel: io scheduler bfq registered
Feb 9 19:04:48.038200 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:04:48.038213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:04:48.038226 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:04:48.038239 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:04:48.038252 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:04:48.038420 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:04:48.038561 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:04:47 UTC (1707505487)
Feb 9 19:04:48.038666 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:04:48.038682 kernel: fail to initialize ptp_kvm
Feb 9 19:04:48.038696 kernel: intel_pstate: CPU model not supported
Feb 9 19:04:48.038709 kernel: efifb: probing for efifb
Feb 9 19:04:48.038722 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:04:48.038735 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:04:48.038748 kernel: efifb: scrolling: redraw
Feb 9 19:04:48.038765 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:04:48.038778 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:04:48.038791 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:04:48.038803 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:04:48.038816 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:04:48.038829 kernel: Segment Routing with IPv6
Feb 9 19:04:48.038843 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:04:48.038856 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:04:48.038869 kernel: Key type dns_resolver registered
Feb 9 19:04:48.038884 kernel: IPI shorthand broadcast: enabled
Feb 9 19:04:48.038896 kernel: sched_clock: Marking stable (786688000, 23567200)->(1014748500, -204493300)
Feb 9 19:04:48.038909 kernel: registered taskstats version 1
Feb 9 19:04:48.038922 kernel: Loading compiled-in X.509 certificates
Feb 9 19:04:48.038935 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:04:48.038949 kernel: Key type .fscrypt registered
Feb 9 19:04:48.038962 kernel: Key type fscrypt-provisioning registered
Feb 9 19:04:48.038975 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:04:48.038991 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:04:48.039005 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:04:48.039018 kernel: ima: No architecture policies found
Feb 9 19:04:48.039031 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:04:48.039044 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:04:48.039058 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:04:48.039071 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:04:48.039084 kernel: Run /init as init process
Feb 9 19:04:48.039096 kernel: with arguments:
Feb 9 19:04:48.039110 kernel: /init
Feb 9 19:04:48.039125 kernel: with environment:
Feb 9 19:04:48.039138 kernel: HOME=/
Feb 9 19:04:48.039150 kernel: TERM=linux
Feb 9 19:04:48.039162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:04:48.039178 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:04:48.039195 systemd[1]: Detected virtualization microsoft.
Feb 9 19:04:48.039210 systemd[1]: Detected architecture x86-64.
Feb 9 19:04:48.039228 systemd[1]: Running in initrd.
Feb 9 19:04:48.039243 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:04:48.039256 systemd[1]: Hostname set to .
Feb 9 19:04:48.039270 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:04:48.039283 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:04:48.039295 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:04:48.039308 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:04:48.039320 systemd[1]: Reached target paths.target.
Feb 9 19:04:48.039333 systemd[1]: Reached target slices.target.
Feb 9 19:04:48.039348 systemd[1]: Reached target swap.target.
Feb 9 19:04:48.039361 systemd[1]: Reached target timers.target.
Feb 9 19:04:48.039375 systemd[1]: Listening on iscsid.socket.
Feb 9 19:04:48.039388 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:04:48.039402 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:04:48.039415 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:04:48.039429 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:04:48.039445 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:04:48.039473 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:04:48.039492 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:04:48.039506 systemd[1]: Reached target sockets.target.
Feb 9 19:04:48.039520 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:04:48.039532 systemd[1]: Finished network-cleanup.service.
Feb 9 19:04:48.039545 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:04:48.039559 systemd[1]: Starting systemd-journald.service...
Feb 9 19:04:48.039572 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:04:48.039589 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:04:48.039602 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:04:48.039620 systemd-journald[183]: Journal started
Feb 9 19:04:48.039687 systemd-journald[183]: Runtime Journal (/run/log/journal/f401c52ddaed40cfa51390571b4239df) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:04:48.033564 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:04:48.056694 systemd[1]: Started systemd-journald.service.
Feb 9 19:04:48.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.057120 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:04:48.072481 kernel: audit: type=1130 audit(1707505488.056:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.073028 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:04:48.096127 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:04:48.096159 kernel: Bridge firewalling registered
Feb 9 19:04:48.074121 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:04:48.075261 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:04:48.076376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:04:48.096851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:04:48.106541 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:04:48.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.135468 kernel: audit: type=1130 audit(1707505488.072:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.129410 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:04:48.132446 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:04:48.143470 kernel: SCSI subsystem initialized
Feb 9 19:04:48.145725 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:04:48.148058 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:04:48.151759 dracut-cmdline[201]: dracut-dracut-053
Feb 9 19:04:48.153822 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:04:48.167736 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:48.191367 kernel: audit: type=1130 audit(1707505488.073:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.194899 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:04:48.248085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:04:48.248126 kernel: audit: type=1130 audit(1707505488.073:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.248147 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:04:48.248163 kernel: audit: type=1130 audit(1707505488.103:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.248182 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:04:48.248198 kernel: audit: type=1130 audit(1707505488.131:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.200710 systemd[1]: Started systemd-resolved.service.
Feb 9 19:04:48.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.252523 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:04:48.268879 kernel: audit: type=1130 audit(1707505488.252:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.252709 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:04:48.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.266461 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:04:48.288496 kernel: audit: type=1130 audit(1707505488.270:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.272360 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:04:48.296278 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:04:48.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.312468 kernel: audit: type=1130 audit(1707505488.297:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:48.334469 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:04:48.347469 kernel: iscsi: registered transport (tcp)
Feb 9 19:04:48.372638 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:04:48.372678 kernel: QLogic iSCSI HBA Driver
Feb 9 19:04:48.401807 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:04:48.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:48.406669 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:04:48.456472 kernel: raid6: avx512x4 gen() 18219 MB/s Feb 9 19:04:48.476466 kernel: raid6: avx512x4 xor() 7031 MB/s Feb 9 19:04:48.496463 kernel: raid6: avx512x2 gen() 18353 MB/s Feb 9 19:04:48.516471 kernel: raid6: avx512x2 xor() 29942 MB/s Feb 9 19:04:48.536463 kernel: raid6: avx512x1 gen() 18300 MB/s Feb 9 19:04:48.556463 kernel: raid6: avx512x1 xor() 27526 MB/s Feb 9 19:04:48.576465 kernel: raid6: avx2x4 gen() 18200 MB/s Feb 9 19:04:48.596464 kernel: raid6: avx2x4 xor() 6779 MB/s Feb 9 19:04:48.616462 kernel: raid6: avx2x2 gen() 18341 MB/s Feb 9 19:04:48.636467 kernel: raid6: avx2x2 xor() 22386 MB/s Feb 9 19:04:48.656463 kernel: raid6: avx2x1 gen() 13658 MB/s Feb 9 19:04:48.676464 kernel: raid6: avx2x1 xor() 19436 MB/s Feb 9 19:04:48.696466 kernel: raid6: sse2x4 gen() 11695 MB/s Feb 9 19:04:48.716462 kernel: raid6: sse2x4 xor() 6561 MB/s Feb 9 19:04:48.736463 kernel: raid6: sse2x2 gen() 12883 MB/s Feb 9 19:04:48.757465 kernel: raid6: sse2x2 xor() 7475 MB/s Feb 9 19:04:48.777465 kernel: raid6: sse2x1 gen() 11708 MB/s Feb 9 19:04:48.800463 kernel: raid6: sse2x1 xor() 5932 MB/s Feb 9 19:04:48.800492 kernel: raid6: using algorithm avx512x2 gen() 18353 MB/s Feb 9 19:04:48.800502 kernel: raid6: .... xor() 29942 MB/s, rmw enabled Feb 9 19:04:48.803906 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:04:48.823476 kernel: xor: automatically using best checksumming function avx Feb 9 19:04:48.919479 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:04:48.927581 systemd[1]: Finished dracut-pre-udev.service. 
Feb 9 19:04:48.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:48.931000 audit: BPF prog-id=7 op=LOAD Feb 9 19:04:48.931000 audit: BPF prog-id=8 op=LOAD Feb 9 19:04:48.932579 systemd[1]: Starting systemd-udevd.service... Feb 9 19:04:48.946487 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 9 19:04:48.951138 systemd[1]: Started systemd-udevd.service. Feb 9 19:04:48.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:48.961774 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:04:48.976364 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Feb 9 19:04:49.006321 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:04:49.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:49.009754 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:04:49.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:49.045227 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:04:49.087466 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:04:49.125521 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:04:49.125563 kernel: AES CTR mode by8 optimization enabled Feb 9 19:04:49.128505 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:04:49.139466 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:04:49.139507 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:04:49.150471 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:04:49.159470 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:04:49.160470 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:04:49.177901 kernel: scsi host1: storvsc_host_t Feb 9 19:04:49.178077 kernel: scsi host0: storvsc_host_t Feb 9 19:04:49.199712 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:04:49.199772 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:04:49.209465 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:04:49.222064 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:04:49.222096 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:04:49.243086 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:04:49.243314 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:04:49.249466 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:04:49.249639 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:04:49.249763 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:04:49.261526 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:04:49.261723 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:04:49.261825 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 19:04:49.266471 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:49.271218 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:04:49.305484 
kernel: hv_netvsc 0022489d-990c-0022-489d-990c0022489d eth0: VF slot 1 added Feb 9 19:04:49.320681 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:04:49.320722 kernel: hv_pci 776c5b67-1fb3-4cf3-9ac6-9ae7b14136ad: PCI VMBus probing: Using version 0x10004 Feb 9 19:04:49.334146 kernel: hv_pci 776c5b67-1fb3-4cf3-9ac6-9ae7b14136ad: PCI host bridge to bus 1fb3:00 Feb 9 19:04:49.334314 kernel: pci_bus 1fb3:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:04:49.334446 kernel: pci_bus 1fb3:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:04:49.344529 kernel: pci 1fb3:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:04:49.355101 kernel: pci 1fb3:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:04:49.371789 kernel: pci 1fb3:00:02.0: enabling Extended Tags Feb 9 19:04:49.391290 kernel: pci 1fb3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1fb3:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:04:49.391502 kernel: pci_bus 1fb3:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:04:49.391617 kernel: pci 1fb3:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:04:49.491474 kernel: mlx5_core 1fb3:00:02.0: firmware version: 14.30.1350 Feb 9 19:04:49.652480 kernel: mlx5_core 1fb3:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:04:49.783011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Feb 9 19:04:49.802009 kernel: mlx5_core 1fb3:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:04:49.802199 kernel: mlx5_core 1fb3:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing Feb 9 19:04:49.813745 kernel: hv_netvsc 0022489d-990c-0022-489d-990c0022489d eth0: VF registering: eth1 Feb 9 19:04:49.813898 kernel: mlx5_core 1fb3:00:02.0 eth1: joined to eth0 Feb 9 19:04:49.826471 kernel: mlx5_core 1fb3:00:02.0 enP8115s1: renamed from eth1 Feb 9 19:04:49.864474 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (437) Feb 9 19:04:49.877744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:04:50.037199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:04:50.097139 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:04:50.105141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:04:50.108749 systemd[1]: Starting disk-uuid.service... Feb 9 19:04:50.125469 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:50.133476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:51.140959 disk-uuid[562]: The operation has completed successfully. Feb 9 19:04:51.143818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:51.214779 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:04:51.214894 systemd[1]: Finished disk-uuid.service. Feb 9 19:04:51.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.233581 systemd[1]: Starting verity-setup.service... 
Feb 9 19:04:51.273471 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:04:51.585720 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:04:51.592652 systemd[1]: Finished verity-setup.service. Feb 9 19:04:51.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.597193 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:04:51.670493 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:04:51.669930 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:04:51.674045 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:04:51.678279 systemd[1]: Starting ignition-setup.service... Feb 9 19:04:51.683650 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:04:51.702482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:51.702525 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:51.702543 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:51.748041 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:04:51.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.753000 audit: BPF prog-id=9 op=LOAD Feb 9 19:04:51.754348 systemd[1]: Starting systemd-networkd.service... Feb 9 19:04:51.778802 systemd-networkd[800]: lo: Link UP Feb 9 19:04:51.778811 systemd-networkd[800]: lo: Gained carrier Feb 9 19:04:51.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:51.779707 systemd-networkd[800]: Enumeration completed Feb 9 19:04:51.779776 systemd[1]: Started systemd-networkd.service. Feb 9 19:04:51.783584 systemd[1]: Reached target network.target. Feb 9 19:04:51.786721 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:04:51.789559 systemd[1]: Starting iscsiuio.service... Feb 9 19:04:51.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.800063 systemd[1]: Started iscsiuio.service. Feb 9 19:04:51.806332 systemd[1]: Starting iscsid.service... Feb 9 19:04:51.812062 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:04:51.815749 iscsid[811]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:04:51.815749 iscsid[811]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:04:51.815749 iscsid[811]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:04:51.815749 iscsid[811]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:04:51.840378 iscsid[811]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:04:51.840378 iscsid[811]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:04:51.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:51.836587 systemd[1]: Started iscsid.service. Feb 9 19:04:51.841203 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:04:51.854792 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:04:51.864903 kernel: mlx5_core 1fb3:00:02.0 enP8115s1: Link up Feb 9 19:04:51.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.859938 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:04:51.864889 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:04:51.867171 systemd[1]: Reached target remote-fs.target. Feb 9 19:04:51.869955 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:04:51.881430 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:04:51.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.891867 systemd[1]: Finished ignition-setup.service. Feb 9 19:04:51.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:51.896812 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:04:51.941480 kernel: hv_netvsc 0022489d-990c-0022-489d-990c0022489d eth0: Data path switched to VF: enP8115s1 Feb 9 19:04:51.941738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:04:51.946988 systemd-networkd[800]: enP8115s1: Link UP Feb 9 19:04:51.947129 systemd-networkd[800]: eth0: Link UP Feb 9 19:04:51.947334 systemd-networkd[800]: eth0: Gained carrier Feb 9 19:04:51.953975 systemd-networkd[800]: enP8115s1: Gained carrier Feb 9 19:04:51.986532 systemd-networkd[800]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:04:53.869678 systemd-networkd[800]: eth0: Gained IPv6LL Feb 9 19:04:55.237999 ignition[827]: Ignition 2.14.0 Feb 9 19:04:55.238018 ignition[827]: Stage: fetch-offline Feb 9 19:04:55.238113 ignition[827]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:55.238173 ignition[827]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:55.332530 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:55.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.334147 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:04:55.358148 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:04:55.358173 kernel: audit: type=1130 audit(1707505495.338:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.332742 ignition[827]: parsed url from cmdline: "" Feb 9 19:04:55.340010 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:04:55.332749 ignition[827]: no config URL provided Feb 9 19:04:55.332758 ignition[827]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:04:55.332769 ignition[827]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:04:55.332778 ignition[827]: failed to fetch config: resource requires networking Feb 9 19:04:55.333057 ignition[827]: Ignition finished successfully Feb 9 19:04:55.348531 ignition[833]: Ignition 2.14.0 Feb 9 19:04:55.348537 ignition[833]: Stage: fetch Feb 9 19:04:55.348671 ignition[833]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:55.348697 ignition[833]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:55.370832 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:55.370971 ignition[833]: parsed url from cmdline: "" Feb 9 19:04:55.370974 ignition[833]: no config URL provided Feb 9 19:04:55.370979 ignition[833]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:04:55.370986 ignition[833]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:04:55.371019 ignition[833]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:04:55.446091 ignition[833]: GET result: OK Feb 9 19:04:55.446196 ignition[833]: config has been read from IMDS userdata Feb 9 19:04:55.446234 ignition[833]: parsing config with SHA512: 6a774fdefcac3aa983a4fc8c6d2771afe39ada58ca30600d860539b302e6780b8c65019d1684e6ca8b29c379f29c5dcde21c8eb87dc81dc31e90ba84a7b25a4e Feb 9 19:04:55.466409 unknown[833]: fetched base config from "system" Feb 9 19:04:55.467697 unknown[833]: fetched base config from "system" Feb 9 19:04:55.468395 ignition[833]: fetch: fetch complete Feb 9 19:04:55.467707 unknown[833]: fetched user config from "azure" Feb 9 19:04:55.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.468404 ignition[833]: fetch: fetch passed Feb 9 19:04:55.493752 kernel: audit: type=1130 audit(1707505495.474:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.472776 systemd[1]: Finished ignition-fetch.service. Feb 9 19:04:55.468446 ignition[833]: Ignition finished successfully Feb 9 19:04:55.475848 systemd[1]: Starting ignition-kargs.service... Feb 9 19:04:55.498987 ignition[839]: Ignition 2.14.0 Feb 9 19:04:55.498992 ignition[839]: Stage: kargs Feb 9 19:04:55.499100 ignition[839]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:55.522571 kernel: audit: type=1130 audit(1707505495.509:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.507461 systemd[1]: Finished ignition-kargs.service. Feb 9 19:04:55.499126 ignition[839]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:55.523623 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:04:55.504020 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:55.505963 ignition[839]: kargs: kargs passed Feb 9 19:04:55.506012 ignition[839]: Ignition finished successfully Feb 9 19:04:55.535523 ignition[845]: Ignition 2.14.0 Feb 9 19:04:55.535716 ignition[845]: Stage: disks Feb 9 19:04:55.535851 ignition[845]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:55.535895 ignition[845]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:55.545516 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:55.550834 ignition[845]: disks: disks passed Feb 9 19:04:55.550887 ignition[845]: Ignition finished successfully Feb 9 19:04:55.554129 systemd[1]: Finished ignition-disks.service. Feb 9 19:04:55.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.557036 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:04:55.581069 kernel: audit: type=1130 audit(1707505495.556:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.570377 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:04:55.570733 systemd[1]: Reached target local-fs.target. Feb 9 19:04:55.571091 systemd[1]: Reached target sysinit.target. Feb 9 19:04:55.571475 systemd[1]: Reached target basic.target. Feb 9 19:04:55.578825 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:04:55.639172 systemd-fsck[853]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:04:55.644504 systemd[1]: Finished systemd-fsck-root.service. 
Feb 9 19:04:55.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.662129 systemd[1]: Mounting sysroot.mount... Feb 9 19:04:55.663408 kernel: audit: type=1130 audit(1707505495.649:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:55.695467 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:04:55.696161 systemd[1]: Mounted sysroot.mount. Feb 9 19:04:55.700031 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:04:55.739007 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:04:55.744996 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:04:55.749535 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:04:55.749572 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:04:55.758883 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:04:55.794241 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:04:55.797826 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:04:55.815474 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (864) Feb 9 19:04:55.815515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:55.824471 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:55.824506 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:55.824625 initrd-setup-root[869]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:04:55.834731 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:04:55.845628 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:04:55.850473 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:04:55.855279 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:04:56.302651 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:04:56.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:56.308680 systemd[1]: Starting ignition-mount.service... Feb 9 19:04:56.325408 kernel: audit: type=1130 audit(1707505496.307:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:56.321716 systemd[1]: Starting sysroot-boot.service... Feb 9 19:04:56.323630 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:04:56.323749 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:04:56.348713 ignition[930]: INFO : Ignition 2.14.0 Feb 9 19:04:56.351332 ignition[930]: INFO : Stage: mount Feb 9 19:04:56.355096 ignition[930]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:56.355096 ignition[930]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:56.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:56.354773 systemd[1]: Finished sysroot-boot.service. 
Feb 9 19:04:56.374179 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:56.374179 ignition[930]: INFO : mount: mount passed Feb 9 19:04:56.374179 ignition[930]: INFO : Ignition finished successfully Feb 9 19:04:56.370135 systemd[1]: Finished ignition-mount.service. Feb 9 19:04:56.388877 kernel: audit: type=1130 audit(1707505496.359:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:56.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:56.404487 kernel: audit: type=1130 audit(1707505496.390:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:57.357038 coreos-metadata[863]: Feb 09 19:04:57.356 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:04:57.373815 coreos-metadata[863]: Feb 09 19:04:57.373 INFO Fetch successful Feb 9 19:04:57.406543 coreos-metadata[863]: Feb 09 19:04:57.406 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:04:57.426915 coreos-metadata[863]: Feb 09 19:04:57.426 INFO Fetch successful Feb 9 19:04:57.442498 coreos-metadata[863]: Feb 09 19:04:57.442 INFO wrote hostname ci-3510.3.2-a-2a68512ec5 to /sysroot/etc/hostname Feb 9 19:04:57.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:57.444422 systemd[1]: Finished flatcar-metadata-hostname.service. 
Feb 9 19:04:57.465639 kernel: audit: type=1130 audit(1707505497.449:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:57.450544 systemd[1]: Starting ignition-files.service... Feb 9 19:04:57.468870 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:04:57.484470 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (942) Feb 9 19:04:57.484502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:57.492135 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:57.492154 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:57.500330 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:04:57.514392 ignition[961]: INFO : Ignition 2.14.0 Feb 9 19:04:57.516742 ignition[961]: INFO : Stage: files Feb 9 19:04:57.516742 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:57.516742 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:57.530048 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:57.543781 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:04:57.546884 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:04:57.546884 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:04:57.663756 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:04:57.667279 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:04:57.667279 ignition[961]: INFO : files: ensureUsers: op(2): [finished] 
adding ssh keys to user "core" Feb 9 19:04:57.667279 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:04:57.667279 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:04:57.667279 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:04:57.667279 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:04:57.664355 unknown[961]: wrote ssh authorized keys file for user: core Feb 9 19:04:58.326684 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:04:58.502391 ignition[961]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:04:58.510300 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:04:58.510300 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:04:58.510300 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:04:58.993822 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:04:59.079707 ignition[961]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:04:59.087610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:04:59.092742 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:04:59.097119 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:04:59.323589 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:04:59.698133 ignition[961]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:04:59.707264 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:04:59.707264 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:04:59.707264 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:04:59.828558 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:05:00.522915 ignition[961]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:05:00.532206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:05:00.532206 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:05:00.532206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:05:00.532206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:05:00.532206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:05:00.532206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:05:00.566761 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:05:00.566761 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:05:00.566761 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:05:00.590773 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (964) Feb 9 19:05:00.590815 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2782336374" Feb 9 19:05:00.590815 ignition[961]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2782336374": device or resource busy Feb 9 19:05:00.590815 ignition[961]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2782336374", trying btrfs: device or resource busy Feb 9 19:05:00.590815 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2782336374" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2782336374" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2782336374" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2782336374" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem502175512" Feb 9 19:05:00.613651 ignition[961]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem502175512": device or resource busy Feb 9 19:05:00.613651 ignition[961]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem502175512", trying btrfs: device or resource busy Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem502175512" Feb 9 19:05:00.613651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem502175512" Feb 9 19:05:00.745505 kernel: audit: type=1130 
audit(1707505500.629:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.745539 kernel: audit: type=1130 audit(1707505500.656:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.745557 kernel: audit: type=1131 audit(1707505500.656:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.594784 systemd[1]: mnt-oem2782336374.mount: Deactivated successfully. 
Feb 9 19:05:00.748262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem502175512" Feb 9 19:05:00.748262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem502175512" Feb 9 19:05:00.748262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(13): [started] processing unit "waagent.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(13): [finished] processing unit "waagent.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(14): [started] processing unit "nvidia.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(14): [finished] processing unit "nvidia.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(17): [started] processing unit "prepare-critools.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(17): [finished] 
processing unit "prepare-critools.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(19): [started] processing unit "containerd.service" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(19): op(1a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(19): op(1a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:05:00.748262 ignition[961]: INFO : files: op(19): [finished] processing unit "containerd.service" Feb 9 19:05:00.809560 kernel: audit: type=1130 audit(1707505500.719:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.809591 kernel: audit: type=1130 audit(1707505500.754:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.809606 kernel: audit: type=1131 audit(1707505500.754:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:05:00.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.618369 systemd[1]: mnt-oem502175512.mount: Deactivated successfully. Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:05:00.810019 ignition[961]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:05:00.810019 ignition[961]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:05:00.810019 ignition[961]: INFO : files: files passed Feb 9 19:05:00.810019 ignition[961]: INFO : Ignition finished successfully Feb 9 19:05:00.626543 systemd[1]: Finished ignition-files.service. 
Feb 9 19:05:00.823914 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:05:00.642647 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:05:00.646558 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:05:00.647583 systemd[1]: Starting ignition-quench.service... Feb 9 19:05:00.651188 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:05:00.651266 systemd[1]: Finished ignition-quench.service. Feb 9 19:05:00.713051 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:05:00.720111 systemd[1]: Reached target ignition-complete.target. Feb 9 19:05:00.728420 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:05:00.750764 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:05:00.750853 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:05:00.754647 systemd[1]: Reached target initrd-fs.target. Feb 9 19:05:00.796494 systemd[1]: Reached target initrd.target. Feb 9 19:05:00.947658 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:05:00.951730 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:05:00.963265 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:05:00.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.968024 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:05:00.986786 kernel: audit: type=1130 audit(1707505500.966:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:00.993377 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 9 19:05:00.995824 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:05:01.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.001264 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:05:01.030586 kernel: audit: type=1130 audit(1707505501.000:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.030623 kernel: audit: type=1131 audit(1707505501.000:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.028336 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:05:01.030608 systemd[1]: Stopped target timers.target. Feb 9 19:05:01.032708 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:05:01.032765 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:05:01.056832 kernel: audit: type=1131 audit(1707505501.041:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.052390 systemd[1]: Stopped target initrd.target. Feb 9 19:05:01.056816 systemd[1]: Stopped target basic.target. Feb 9 19:05:01.059572 systemd[1]: Stopped target ignition-complete.target. 
Feb 9 19:05:01.063492 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:05:01.066966 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:05:01.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.069116 systemd[1]: Stopped target remote-fs.target. Feb 9 19:05:01.072853 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:05:01.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.074978 systemd[1]: Stopped target sysinit.target. Feb 9 19:05:01.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.078981 systemd[1]: Stopped target local-fs.target. Feb 9 19:05:01.081050 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:05:01.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:05:01.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.085220 systemd[1]: Stopped target swap.target. Feb 9 19:05:01.129492 ignition[999]: INFO : Ignition 2.14.0 Feb 9 19:05:01.129492 ignition[999]: INFO : Stage: umount Feb 9 19:05:01.129492 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:05:01.129492 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:05:01.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.086996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:05:01.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.149104 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:05:01.149104 ignition[999]: INFO : umount: umount passed Feb 9 19:05:01.149104 ignition[999]: INFO : Ignition finished successfully Feb 9 19:05:01.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.087059 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 9 19:05:01.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.091184 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:05:01.093156 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:05:01.093201 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:05:01.097639 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:05:01.097681 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:05:01.101978 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:05:01.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.102027 systemd[1]: Stopped ignition-files.service. Feb 9 19:05:01.104064 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 19:05:01.104103 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 19:05:01.108888 systemd[1]: Stopping ignition-mount.service... Feb 9 19:05:01.111967 systemd[1]: Stopping iscsiuio.service... Feb 9 19:05:01.114908 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:05:01.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.117740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:05:01.117817 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:05:01.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:05:01.120217 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:05:01.120267 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:05:01.221000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:05:01.122995 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:05:01.123123 systemd[1]: Stopped iscsiuio.service. Feb 9 19:05:01.136676 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:05:01.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.136764 systemd[1]: Stopped ignition-mount.service. Feb 9 19:05:01.138904 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:05:01.138947 systemd[1]: Stopped ignition-disks.service. Feb 9 19:05:01.149025 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:05:01.149073 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:05:01.155795 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:05:01.155854 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:05:01.160170 systemd[1]: Stopped target network.target. Feb 9 19:05:01.162166 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:05:01.162210 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:05:01.166137 systemd[1]: Stopped target paths.target. Feb 9 19:05:01.168052 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:05:01.170494 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:05:01.174790 systemd[1]: Stopped target slices.target. Feb 9 19:05:01.176725 systemd[1]: Stopped target sockets.target. Feb 9 19:05:01.180539 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:05:01.180575 systemd[1]: Closed iscsid.socket. Feb 9 19:05:01.184916 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Feb 9 19:05:01.184958 systemd[1]: Closed iscsiuio.socket. Feb 9 19:05:01.189127 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:05:01.189176 systemd[1]: Stopped ignition-setup.service. Feb 9 19:05:01.193900 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:05:01.197769 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:05:01.204501 systemd-networkd[800]: eth0: DHCPv6 lease lost Feb 9 19:05:01.244000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:05:01.206340 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:05:01.206602 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:05:01.215496 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:05:01.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.215591 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:05:01.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.222115 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:05:01.222154 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:05:01.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.226780 systemd[1]: Stopping network-cleanup.service... Feb 9 19:05:01.230165 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:05:01.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:05:01.230223 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:05:01.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.234120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:05:01.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.234170 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:05:01.290064 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:05:01.290111 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:05:01.292314 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:05:01.297462 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:05:01.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.300061 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:05:01.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.300197 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:05:01.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.305601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 9 19:05:01.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.305640 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:05:01.307881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:05:01.307921 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:05:01.310093 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:05:01.310129 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:05:01.314574 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:05:01.314620 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:05:01.319005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:05:01.319051 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:05:01.380694 kernel: hv_netvsc 0022489d-990c-0022-489d-990c0022489d eth0: Data path switched from VF: enP8115s1 Feb 9 19:05:01.324235 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:05:01.334575 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:05:01.334636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:05:01.340122 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:05:01.340167 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:05:01.344587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:05:01.344634 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:05:01.349122 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 9 19:05:01.349210 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:05:01.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:01.401397 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:05:01.401506 systemd[1]: Stopped network-cleanup.service. Feb 9 19:05:01.591759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:05:02.132428 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:05:02.203964 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:05:02.204104 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:05:02.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:02.210845 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:05:02.215550 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:05:02.215627 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:05:02.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:02.221337 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:05:02.231000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:05:02.231000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:05:02.232000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:05:02.232000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:05:02.232000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:05:02.229829 systemd[1]: Switching root. Feb 9 19:05:02.257552 iscsid[811]: iscsid shutting down. Feb 9 19:05:02.259508 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). 
Feb 9 19:05:02.259571 systemd-journald[183]: Journal stopped
Feb 9 19:05:18.264022 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:05:18.264050 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:05:18.264062 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:05:18.264071 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:05:18.264081 kernel: SELinux: policy capability open_perms=1
Feb 9 19:05:18.264091 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:05:18.264102 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:05:18.264112 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:05:18.264120 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:05:18.264131 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:05:18.264139 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:05:18.264150 kernel: kauditd_printk_skb: 39 callbacks suppressed
Feb 9 19:05:18.264158 kernel: audit: type=1403 audit(1707505505.969:87): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:05:18.264170 systemd[1]: Successfully loaded SELinux policy in 259.864ms.
Feb 9 19:05:18.264185 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.890ms.
Feb 9 19:05:18.264197 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:05:18.264207 systemd[1]: Detected virtualization microsoft.
Feb 9 19:05:18.264219 systemd[1]: Detected architecture x86-64.
Feb 9 19:05:18.264229 systemd[1]: Detected first boot.
Feb 9 19:05:18.264242 systemd[1]: Hostname set to .
Feb 9 19:05:18.264252 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:05:18.264264 kernel: audit: type=1400 audit(1707505506.813:88): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:05:18.264277 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:05:18.264286 kernel: audit: type=1400 audit(1707505508.381:89): avc: denied { associate } for pid=1051 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:05:18.264298 kernel: audit: type=1300 audit(1707505508.381:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001096b2 a1=c00002cb58 a2=c00002aa40 a3=32 items=0 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:05:18.264311 kernel: audit: type=1327 audit(1707505508.381:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:05:18.264323 kernel: audit: type=1400 audit(1707505508.389:90): avc: denied { associate } for pid=1051 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:05:18.264333 kernel: audit: type=1300 audit(1707505508.389:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000109789 a2=1ed a3=0 items=2 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:05:18.264344 kernel: audit: type=1307 audit(1707505508.389:90): cwd="/"
Feb 9 19:05:18.264353 kernel: audit: type=1302 audit(1707505508.389:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:18.264366 kernel: audit: type=1302 audit(1707505508.389:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:18.264376 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:05:18.264389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:05:18.264401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:05:18.264412 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:05:18.264423 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:05:18.264433 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:05:18.271766 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:05:18.271802 systemd[1]: Created slice system-getty.slice.
Feb 9 19:05:18.271815 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:05:18.271834 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:05:18.271847 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:05:18.271860 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:05:18.271871 systemd[1]: Created slice user.slice.
Feb 9 19:05:18.271883 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:05:18.271894 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:05:18.271909 systemd[1]: Set up automount boot.automount.
Feb 9 19:05:18.271920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:05:18.271932 systemd[1]: Reached target integritysetup.target.
Feb 9 19:05:18.271946 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:05:18.271957 systemd[1]: Reached target remote-fs.target.
Feb 9 19:05:18.271967 systemd[1]: Reached target slices.target.
Feb 9 19:05:18.271980 systemd[1]: Reached target swap.target.
Feb 9 19:05:18.271991 systemd[1]: Reached target torcx.target.
Feb 9 19:05:18.272002 systemd[1]: Reached target veritysetup.target.
Feb 9 19:05:18.272015 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:05:18.272027 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:05:18.272040 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:05:18.272050 kernel: audit: type=1400 audit(1707505517.883:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:05:18.272061 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:05:18.272072 kernel: audit: type=1335 audit(1707505517.883:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 9 19:05:18.272084 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:05:18.272096 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:05:18.272105 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:05:18.272118 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:05:18.272128 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:05:18.272142 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:05:18.272154 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:05:18.272166 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:05:18.272179 systemd[1]: Mounting media.mount...
Feb 9 19:05:18.272189 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:05:18.272201 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:05:18.272212 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:05:18.272225 systemd[1]: Mounting tmp.mount...
Feb 9 19:05:18.272235 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:05:18.272248 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:05:18.272264 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:05:18.272274 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:05:18.272287 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:05:18.272297 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:05:18.272309 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:05:18.272319 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:05:18.272331 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:05:18.272345 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:05:18.272358 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 19:05:18.272370 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 19:05:18.272382 systemd[1]: Starting systemd-journald.service...
Feb 9 19:05:18.272393 kernel: loop: module loaded
Feb 9 19:05:18.272405 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:05:18.272415 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:05:18.272425 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:05:18.272438 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:05:18.272461 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:05:18.272477 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:05:18.272488 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:05:18.272500 systemd[1]: Mounted media.mount.
Feb 9 19:05:18.272510 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:05:18.272519 kernel: fuse: init (API version 7.34)
Feb 9 19:05:18.272531 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:05:18.272542 systemd[1]: Mounted tmp.mount.
Feb 9 19:05:18.272554 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:05:18.272563 kernel: audit: type=1130 audit(1707505518.221:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.272579 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:05:18.272592 kernel: audit: type=1130 audit(1707505518.247:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.272602 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:05:18.272616 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:05:18.272628 kernel: audit: type=1305 audit(1707505518.261:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:05:18.272642 systemd-journald[1158]: Journal started
Feb 9 19:05:18.272695 systemd-journald[1158]: Runtime Journal (/run/log/journal/afc34d2cf22b402b8891cbc4e14c115a) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:05:17.883000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 9 19:05:18.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.261000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:05:18.261000 audit[1158]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1e921b60 a2=4000 a3=7ffd1e921bfc items=0 ppid=1 pid=1158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:05:18.298638 kernel: audit: type=1300 audit(1707505518.261:95): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1e921b60 a2=4000 a3=7ffd1e921bfc items=0 ppid=1 pid=1158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:05:18.261000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:05:18.310888 kernel: audit: type=1327 audit(1707505518.261:95): proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:05:18.310919 systemd[1]: Started systemd-journald.service.
Feb 9 19:05:18.310947 kernel: audit: type=1130 audit(1707505518.299:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.340806 kernel: audit: type=1131 audit(1707505518.299:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.340855 kernel: audit: type=1130 audit(1707505518.338:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.340743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:05:18.356413 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:05:18.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.359195 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:05:18.359766 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:05:18.362331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:05:18.362616 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:05:18.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.366020 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:05:18.366342 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:05:18.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.369204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:05:18.369564 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:05:18.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.372550 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:05:18.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.375680 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:05:18.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.379129 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:05:18.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.385683 systemd[1]: Reached target network-pre.target.
Feb 9 19:05:18.389724 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:05:18.394211 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:05:18.398260 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:05:18.444216 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:05:18.448697 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:05:18.451719 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:05:18.453006 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:05:18.455413 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:05:18.456519 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:05:18.460727 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:05:18.467858 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:05:18.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.470318 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:05:18.472448 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:05:18.475380 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:05:18.485709 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:05:18.491433 systemd-journald[1158]: Time spent on flushing to /var/log/journal/afc34d2cf22b402b8891cbc4e14c115a is 24.940ms for 1116 entries.
Feb 9 19:05:18.491433 systemd-journald[1158]: System Journal (/var/log/journal/afc34d2cf22b402b8891cbc4e14c115a) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:05:18.573639 systemd-journald[1158]: Received client request to flush runtime journal.
Feb 9 19:05:18.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.501733 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:05:18.504410 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:05:18.575094 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:05:18.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:18.583027 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:05:19.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:19.137253 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:05:19.141921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:05:19.614939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:05:19.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:19.804429 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:05:19.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:19.808698 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:05:19.828648 systemd-udevd[1213]: Using default interface naming scheme 'v252'.
Feb 9 19:05:20.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:20.188347 systemd[1]: Started systemd-udevd.service.
Feb 9 19:05:20.194024 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:05:20.254641 systemd[1]: Found device dev-ttyS0.device.
Feb 9 19:05:20.326854 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:05:20.326996 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:05:20.341071 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:05:20.341176 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:05:20.341210 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:05:20.950407 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:05:20.950518 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:05:20.951392 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:05:20.952390 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:05:20.962495 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:05:20.965156 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:05:20.965227 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:05:20.340000 audit[1214]: AVC avc: denied { confidentiality } for pid=1214 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:05:21.071988 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:05:21.078617 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:05:21.082791 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:05:21.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:20.340000 audit[1214]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c212bad3b0 a1=f884 a2=7f805dfa5bc5 a3=5 items=12 ppid=1213 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:05:20.340000 audit: CWD cwd="/"
Feb 9 19:05:20.340000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=1 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=2 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=3 name=(null) inode=15802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=4 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=5 name=(null) inode=15803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=6 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=7 name=(null) inode=15804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=8 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=9 name=(null) inode=15805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=10 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PATH item=11 name=(null) inode=15806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:05:20.340000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:05:21.202396 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1214)
Feb 9 19:05:21.219405 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:05:21.247420 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 9 19:05:21.346805 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:05:21.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:21.351255 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:05:21.424996 systemd-networkd[1223]: lo: Link UP
Feb 9 19:05:21.425008 systemd-networkd[1223]: lo: Gained carrier
Feb 9 19:05:21.425813 systemd-networkd[1223]: Enumeration completed
Feb 9 19:05:21.425979 systemd[1]: Started systemd-networkd.service.
Feb 9 19:05:21.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:21.429922 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:05:21.455976 systemd-networkd[1223]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:05:21.512395 kernel: mlx5_core 1fb3:00:02.0 enP8115s1: Link up
Feb 9 19:05:21.552390 kernel: hv_netvsc 0022489d-990c-0022-489d-990c0022489d eth0: Data path switched to VF: enP8115s1
Feb 9 19:05:21.553608 systemd-networkd[1223]: enP8115s1: Link UP
Feb 9 19:05:21.553783 systemd-networkd[1223]: eth0: Link UP
Feb 9 19:05:21.553798 systemd-networkd[1223]: eth0: Gained carrier
Feb 9 19:05:21.558700 systemd-networkd[1223]: enP8115s1: Gained carrier
Feb 9 19:05:21.592515 systemd-networkd[1223]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:05:21.780570 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:05:21.810198 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:05:21.813108 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:05:21.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:21.816837 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:05:21.823174 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:05:21.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:05:21.842500 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:05:21.845541 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:05:21.848261 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:05:21.848297 systemd[1]: Reached target local-fs.target.
Feb 9 19:05:21.850892 systemd[1]: Reached target machines.target.
Feb 9 19:05:21.855058 systemd[1]: Starting ldconfig.service...
Feb 9 19:05:21.857590 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:05:21.857705 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:05:21.859319 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:05:21.862643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:05:21.866665 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:05:21.869433 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:05:21.869540 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:05:21.870687 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:05:21.911412 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:05:21.927578 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:05:22.437263 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1296 (bootctl) Feb 9 19:05:22.439106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:05:22.452476 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:05:22.460811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:05:22.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:22.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:22.758271 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:05:22.759280 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:05:22.890797 systemd-networkd[1223]: eth0: Gained IPv6LL Feb 9 19:05:22.894771 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:05:22.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.354244 systemd-fsck[1305]: fsck.fat 4.2 (2021-01-31) Feb 9 19:05:23.354244 systemd-fsck[1305]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:05:23.356839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 9 19:05:23.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.362219 systemd[1]: Mounting boot.mount... Feb 9 19:05:23.377685 systemd[1]: Mounted boot.mount. Feb 9 19:05:23.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.391745 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:05:23.596106 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:05:23.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.601211 systemd[1]: Starting audit-rules.service... Feb 9 19:05:23.605583 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 19:05:23.605656 kernel: audit: type=1130 audit(1707505523.597:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.617913 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:05:23.621635 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:05:23.626066 systemd[1]: Starting systemd-resolved.service... Feb 9 19:05:23.630436 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:05:23.634443 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:05:23.641031 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 19:05:23.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.643886 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:05:23.655446 kernel: audit: type=1130 audit(1707505523.642:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.665000 audit[1326]: SYSTEM_BOOT pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.679393 kernel: audit: type=1127 audit(1707505523.665:133): pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.681903 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:05:23.696446 kernel: audit: type=1130 audit(1707505523.682:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:05:23.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.756886 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:05:23.759841 systemd[1]: Reached target time-set.target. Feb 9 19:05:23.771390 kernel: audit: type=1130 audit(1707505523.759:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:05:23.888000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:05:23.889735 augenrules[1340]: No rules Feb 9 19:05:23.890740 systemd[1]: Finished audit-rules.service. Feb 9 19:05:23.897699 kernel: audit: type=1305 audit(1707505523.888:136): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:05:23.897764 kernel: audit: type=1300 audit(1707505523.888:136): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5be8f4a0 a2=420 a3=0 items=0 ppid=1317 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.897784 kernel: audit: type=1327 audit(1707505523.888:136): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:05:23.888000 audit[1340]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5be8f4a0 a2=420 a3=0 items=0 ppid=1317 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.888000 audit: PROCTITLE 
proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:05:23.933045 systemd-timesyncd[1323]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 9 19:05:23.933113 systemd-timesyncd[1323]: Initial clock synchronization to Fri 2024-02-09 19:05:23.935098 UTC. Feb 9 19:05:23.953607 systemd-resolved[1321]: Positive Trust Anchors: Feb 9 19:05:23.953623 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:05:23.953661 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:05:23.992182 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:05:24.107938 systemd-resolved[1321]: Using system hostname 'ci-3510.3.2-a-2a68512ec5'. Feb 9 19:05:24.110023 systemd[1]: Started systemd-resolved.service. Feb 9 19:05:24.112688 systemd[1]: Reached target network.target. Feb 9 19:05:24.114983 systemd[1]: Reached target network-online.target. Feb 9 19:05:24.117531 systemd[1]: Reached target nss-lookup.target. Feb 9 19:05:30.135662 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:05:30.145000 systemd[1]: Finished ldconfig.service. Feb 9 19:05:30.150015 systemd[1]: Starting systemd-update-done.service... Feb 9 19:05:30.158713 systemd[1]: Finished systemd-update-done.service. Feb 9 19:05:30.161265 systemd[1]: Reached target sysinit.target. Feb 9 19:05:30.163683 systemd[1]: Started motdgen.path. 
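The `proctitle=` field in the audit PROCTITLE record above is the process's argv, hex-encoded with NUL separators. Decoding the exact hex string from the log recovers the `auditctl` invocation that loaded `/etc/audit/audit.rules`:

```python
# Hex string copied verbatim from the audit PROCTITLE record above.
hexstr = (
    "2F7362696E2F617564697463746C002D5200"
    "2F6574632F61756469742F61756469742E72756C6573"
)

# argv elements are separated by NUL bytes in the raw proctitle.
argv = [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This matches the surrounding SYSCALL record (`comm="auditctl" exe="/usr/sbin/auditctl"`), which is the augenrules/audit-rules path loading the compiled rule set.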
Feb 9 19:05:30.165521 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:05:30.168521 systemd[1]: Started logrotate.timer. Feb 9 19:05:30.170463 systemd[1]: Started mdadm.timer. Feb 9 19:05:30.172129 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:05:30.174353 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:05:30.174403 systemd[1]: Reached target paths.target. Feb 9 19:05:30.176288 systemd[1]: Reached target timers.target. Feb 9 19:05:30.178788 systemd[1]: Listening on dbus.socket. Feb 9 19:05:30.181885 systemd[1]: Starting docker.socket... Feb 9 19:05:30.198958 systemd[1]: Listening on sshd.socket. Feb 9 19:05:30.201004 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:05:30.201479 systemd[1]: Listening on docker.socket. Feb 9 19:05:30.203455 systemd[1]: Reached target sockets.target. Feb 9 19:05:30.205542 systemd[1]: Reached target basic.target. Feb 9 19:05:30.207553 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:05:30.207615 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:05:30.207644 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:05:30.208667 systemd[1]: Starting containerd.service... Feb 9 19:05:30.211897 systemd[1]: Starting dbus.service... Feb 9 19:05:30.214972 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:05:30.218541 systemd[1]: Starting extend-filesystems.service... Feb 9 19:05:30.220698 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:05:30.222241 systemd[1]: Starting motdgen.service... 
Feb 9 19:05:30.225356 systemd[1]: Started nvidia.service. Feb 9 19:05:30.228498 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:05:30.232263 systemd[1]: Starting prepare-critools.service... Feb 9 19:05:30.235900 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:05:30.240578 systemd[1]: Starting sshd-keygen.service... Feb 9 19:05:30.249312 systemd[1]: Starting systemd-logind.service... Feb 9 19:05:30.252913 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:05:30.252999 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:05:30.255112 systemd[1]: Starting update-engine.service... Feb 9 19:05:30.259801 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:05:30.269187 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:05:30.269589 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:05:30.389113 systemd-logind[1365]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:05:30.390002 systemd-logind[1365]: New seat seat0. 
Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda1 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda2 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda3 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found usr Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda4 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda6 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda7 Feb 9 19:05:30.392448 extend-filesystems[1356]: Found sda9 Feb 9 19:05:30.392448 extend-filesystems[1356]: Checking size of /dev/sda9 Feb 9 19:05:30.447517 jq[1355]: false Feb 9 19:05:30.447626 tar[1374]: crictl Feb 9 19:05:30.447904 tar[1372]: ./ Feb 9 19:05:30.447904 tar[1372]: ./macvlan Feb 9 19:05:30.421812 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:05:30.448252 jq[1367]: true Feb 9 19:05:30.422146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:05:30.428037 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:05:30.428327 systemd[1]: Finished motdgen.service. Feb 9 19:05:30.469433 jq[1410]: true Feb 9 19:05:30.464192 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:05:30.469708 extend-filesystems[1356]: Old size kept for /dev/sda9 Feb 9 19:05:30.469708 extend-filesystems[1356]: Found sr0 Feb 9 19:05:30.464521 systemd[1]: Finished extend-filesystems.service. Feb 9 19:05:30.537422 env[1408]: time="2024-02-09T19:05:30.537351265Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:05:30.571673 tar[1372]: ./static Feb 9 19:05:30.624317 dbus-daemon[1353]: [system] SELinux support is enabled Feb 9 19:05:30.624568 systemd[1]: Started dbus.service. 
Feb 9 19:05:30.630469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:05:30.630512 systemd[1]: Reached target system-config.target. Feb 9 19:05:30.632969 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:05:30.632996 systemd[1]: Reached target user-config.target. Feb 9 19:05:30.639286 systemd[1]: Started systemd-logind.service. Feb 9 19:05:30.667326 tar[1372]: ./vlan Feb 9 19:05:30.671913 bash[1436]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:05:30.672323 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:05:30.698771 env[1408]: time="2024-02-09T19:05:30.698707033Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:05:30.699041 env[1408]: time="2024-02-09T19:05:30.699018661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.706556 env[1408]: time="2024-02-09T19:05:30.706520443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:05:30.706664 env[1408]: time="2024-02-09T19:05:30.706647854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.707102 env[1408]: time="2024-02-09T19:05:30.707075093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:05:30.710423 env[1408]: time="2024-02-09T19:05:30.710398395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.710536 env[1408]: time="2024-02-09T19:05:30.710519206Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:05:30.710605 env[1408]: time="2024-02-09T19:05:30.710592413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.710754 env[1408]: time="2024-02-09T19:05:30.710739426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.711070 env[1408]: time="2024-02-09T19:05:30.711052455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:05:30.711440 env[1408]: time="2024-02-09T19:05:30.711418788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:05:30.711521 env[1408]: time="2024-02-09T19:05:30.711508496Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:05:30.711634 env[1408]: time="2024-02-09T19:05:30.711616606Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:05:30.711706 env[1408]: time="2024-02-09T19:05:30.711694013Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:05:30.728127 systemd[1]: nvidia.service: Deactivated successfully. 
Feb 9 19:05:30.759322 env[1408]: time="2024-02-09T19:05:30.759278039Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:05:30.759462 env[1408]: time="2024-02-09T19:05:30.759334644Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:05:30.759462 env[1408]: time="2024-02-09T19:05:30.759353245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:05:30.759462 env[1408]: time="2024-02-09T19:05:30.759423452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759462 env[1408]: time="2024-02-09T19:05:30.759448254Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759609 env[1408]: time="2024-02-09T19:05:30.759525061Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759609 env[1408]: time="2024-02-09T19:05:30.759546963Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759609 env[1408]: time="2024-02-09T19:05:30.759568565Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759609 env[1408]: time="2024-02-09T19:05:30.759589167Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759749 env[1408]: time="2024-02-09T19:05:30.759609469Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.759749 env[1408]: time="2024-02-09T19:05:30.759628070Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 9 19:05:30.759749 env[1408]: time="2024-02-09T19:05:30.759651973Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:05:30.759854 env[1408]: time="2024-02-09T19:05:30.759786085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:05:30.759910 env[1408]: time="2024-02-09T19:05:30.759890094Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:05:30.760420 env[1408]: time="2024-02-09T19:05:30.760387539Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:05:30.760502 env[1408]: time="2024-02-09T19:05:30.760440544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760502 env[1408]: time="2024-02-09T19:05:30.760463546Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:05:30.760583 env[1408]: time="2024-02-09T19:05:30.760527852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760583 env[1408]: time="2024-02-09T19:05:30.760549954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760583 env[1408]: time="2024-02-09T19:05:30.760568756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760585157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760607059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760623761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760641562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760658364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760685 env[1408]: time="2024-02-09T19:05:30.760678266Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:05:30.760897 env[1408]: time="2024-02-09T19:05:30.760838680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760897 env[1408]: time="2024-02-09T19:05:30.760859882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.760897 env[1408]: time="2024-02-09T19:05:30.760878284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.761029 env[1408]: time="2024-02-09T19:05:30.760895086Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:05:30.761029 env[1408]: time="2024-02-09T19:05:30.760915987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:05:30.761029 env[1408]: time="2024-02-09T19:05:30.760943390Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 9 19:05:30.761029 env[1408]: time="2024-02-09T19:05:30.760970292Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:05:30.761162 env[1408]: time="2024-02-09T19:05:30.761026998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:05:30.761361 env[1408]: time="2024-02-09T19:05:30.761292422Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.761395031Z" level=info msg="Connect containerd service" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.761443935Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.762636844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.762990076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.763048481Z" level=info msg=serving... 
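The CRI plugin configuration dumped above is what containerd derives from its `/etc/containerd/config.toml`. As an illustrative sketch (not the exact file from this host, which is not shown in the log), the notable fields in the dump — `SystemdCgroup:false` on the `runc` runtime, `SandboxImage:registry.k8s.io/pause:3.6`, and the CNI directories — would correspond to TOML roughly like:

```toml
# Hypothetical config.toml fragment matching the dumped CRI plugin state.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

`SystemdCgroup = false` is consistent with the earlier "System is tainted: cgroupsv1" line, and the empty `/etc/cni/net.d` explains the "failed to load cni during init" error logged just below, which clears once the prepare-cni-plugins service populates the CNI binaries.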
address=/run/containerd/containerd.sock Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.765683421Z" level=info msg="containerd successfully booted in 0.230338s" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782754073Z" level=info msg="Start subscribing containerd event" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782822779Z" level=info msg="Start recovering state" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782907987Z" level=info msg="Start event monitor" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782936589Z" level=info msg="Start snapshots syncer" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782954591Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:05:30.836259 env[1408]: time="2024-02-09T19:05:30.782968292Z" level=info msg="Start streaming server" Feb 9 19:05:30.836741 tar[1372]: ./portmap Feb 9 19:05:30.763198 systemd[1]: Started containerd.service. Feb 9 19:05:30.852535 tar[1372]: ./host-local Feb 9 19:05:30.891104 tar[1372]: ./vrf Feb 9 19:05:30.960523 tar[1372]: ./bridge Feb 9 19:05:31.010200 tar[1372]: ./tuning Feb 9 19:05:31.049432 tar[1372]: ./firewall Feb 9 19:05:31.117450 tar[1372]: ./host-device Feb 9 19:05:31.197154 tar[1372]: ./sbr Feb 9 19:05:31.273562 tar[1372]: ./loopback Feb 9 19:05:31.323007 update_engine[1366]: I0209 19:05:31.322346 1366 main.cc:92] Flatcar Update Engine starting Feb 9 19:05:31.339919 tar[1372]: ./dhcp Feb 9 19:05:31.372500 systemd[1]: Started update-engine.service. Feb 9 19:05:31.374527 update_engine[1366]: I0209 19:05:31.374420 1366 update_check_scheduler.cc:74] Next update check in 3m26s Feb 9 19:05:31.377946 systemd[1]: Started locksmithd.service. Feb 9 19:05:31.389479 systemd[1]: Finished prepare-critools.service. Feb 9 19:05:31.477622 tar[1372]: ./ptp Feb 9 19:05:31.519659 tar[1372]: ./ipvlan Feb 9 19:05:31.560519 tar[1372]: ./bandwidth Feb 9 19:05:31.644766 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 19:05:32.470575 sshd_keygen[1379]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:05:32.491003 systemd[1]: Finished sshd-keygen.service. Feb 9 19:05:32.496079 systemd[1]: Starting issuegen.service... Feb 9 19:05:32.501208 systemd[1]: Started waagent.service. Feb 9 19:05:32.508278 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:05:32.508581 systemd[1]: Finished issuegen.service. Feb 9 19:05:32.512556 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:05:32.521772 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:05:32.526919 systemd[1]: Started getty@tty1.service. Feb 9 19:05:32.530433 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:05:32.533044 systemd[1]: Reached target getty.target. Feb 9 19:05:32.535249 systemd[1]: Reached target multi-user.target. Feb 9 19:05:32.538896 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:05:32.547031 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:05:32.547321 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:05:32.552760 systemd[1]: Startup finished in 915ms (firmware) + 29.025s (loader) + 18.547s (kernel) + 26.631s (userspace) = 1min 15.120s. Feb 9 19:05:32.963062 login[1504]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 19:05:32.965096 login[1505]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:05:32.998829 systemd[1]: Created slice user-500.slice. Feb 9 19:05:33.000263 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:05:33.004957 systemd-logind[1365]: New session 2 of user core. Feb 9 19:05:33.011803 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:05:33.013512 systemd[1]: Starting user@500.service... 
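The "Startup finished" summary above can be cross-checked arithmetically: summing the four phase durations printed by systemd reproduces the reported total to within a couple of milliseconds (each phase is independently rounded in the log, so 75.118 s vs. the reported 1min 15.120s is expected rounding slack).

```python
# Phase durations copied from the "Startup finished" line, in seconds.
phases = {
    "firmware": 0.915,
    "loader": 29.025,
    "kernel": 18.547,
    "userspace": 26.631,
}

total = sum(phases.values())
print(round(total, 3))  # 75.118 — log reports 1min 15.120s after per-phase rounding
```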
Feb 9 19:05:33.034125 (systemd)[1511]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:33.164658 systemd[1511]: Queued start job for default target default.target. Feb 9 19:05:33.165011 systemd[1511]: Reached target paths.target. Feb 9 19:05:33.165037 systemd[1511]: Reached target sockets.target. Feb 9 19:05:33.165059 systemd[1511]: Reached target timers.target. Feb 9 19:05:33.165080 systemd[1511]: Reached target basic.target. Feb 9 19:05:33.165143 systemd[1511]: Reached target default.target. Feb 9 19:05:33.165184 systemd[1511]: Startup finished in 124ms. Feb 9 19:05:33.165273 systemd[1]: Started user@500.service. Feb 9 19:05:33.166885 systemd[1]: Started session-2.scope. Feb 9 19:05:33.175047 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:05:33.963498 login[1504]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:05:33.969872 systemd[1]: Started session-1.scope. Feb 9 19:05:33.970818 systemd-logind[1365]: New session 1 of user core. Feb 9 19:05:39.345117 waagent[1497]: 2024-02-09T19:05:39.344981Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:05:39.365061 waagent[1497]: 2024-02-09T19:05:39.364976Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:05:39.368155 waagent[1497]: 2024-02-09T19:05:39.368089Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:05:39.371019 waagent[1497]: 2024-02-09T19:05:39.370944Z INFO Daemon Daemon Run daemon Feb 9 19:05:39.373732 waagent[1497]: 2024-02-09T19:05:39.373666Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:05:39.387241 waagent[1497]: 2024-02-09T19:05:39.387125Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:05:39.394489 waagent[1497]: 2024-02-09T19:05:39.394384Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:05:39.401173 waagent[1497]: 2024-02-09T19:05:39.394786Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:05:39.401173 waagent[1497]: 2024-02-09T19:05:39.395819Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:05:39.401173 waagent[1497]: 2024-02-09T19:05:39.397419Z INFO Daemon Daemon Activate resource disk Feb 9 19:05:39.401173 waagent[1497]: 2024-02-09T19:05:39.398367Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.406277Z INFO Daemon Daemon Found device: None Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.407512Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.408428Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.410163Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.411271Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.421025Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.423808Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.424691Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:05:39.440443 waagent[1497]: 2024-02-09T19:05:39.425960Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:05:39.538443 waagent[1497]: 2024-02-09T19:05:39.538226Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:05:39.626776 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:05:39.647206 waagent[1497]: 2024-02-09T19:05:39.647066Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:05:39.650448 waagent[1497]: 2024-02-09T19:05:39.650352Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:05:39.653830 waagent[1497]: 2024-02-09T19:05:39.653766Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 9 19:05:39.657442 waagent[1497]: 2024-02-09T19:05:39.657366Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:05:39.660677 waagent[1497]: 2024-02-09T19:05:39.660615Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:05:39.663522 waagent[1497]: 2024-02-09T19:05:39.663461Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:05:39.858085 waagent[1497]: 2024-02-09T19:05:39.858004Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:05:39.866806 waagent[1497]: 2024-02-09T19:05:39.858944Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:05:39.866806 waagent[1497]: 2024-02-09T19:05:39.860026Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:05:40.226769 waagent[1497]: 2024-02-09T19:05:40.226609Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:05:40.239934 waagent[1497]: 2024-02-09T19:05:40.239854Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 9 19:05:40.243213 waagent[1497]: 2024-02-09T19:05:40.243144Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:05:40.324198 waagent[1497]: 2024-02-09T19:05:40.324062Z INFO Daemon Daemon Found private key matching thumbprint 04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0 Feb 9 19:05:40.336227 waagent[1497]: 2024-02-09T19:05:40.324667Z INFO Daemon Daemon Certificate with thumbprint 2676C4A6948D5AAD43DA940E7BC6AC1D3F6A47A8 has no matching private key. Feb 9 19:05:40.336227 waagent[1497]: 2024-02-09T19:05:40.325885Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:05:40.366687 waagent[1497]: 2024-02-09T19:05:40.366589Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 7268b480-5705-4e65-b85f-bcf3dcd3a5fd New eTag: 17862325162028876038] Feb 9 19:05:40.376224 waagent[1497]: 2024-02-09T19:05:40.367716Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:05:40.382957 waagent[1497]: 2024-02-09T19:05:40.382892Z INFO Daemon Daemon Starting provisioning Feb 9 19:05:40.390163 waagent[1497]: 2024-02-09T19:05:40.383219Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:05:40.390163 waagent[1497]: 2024-02-09T19:05:40.384340Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-2a68512ec5] Feb 9 19:05:40.404914 waagent[1497]: 2024-02-09T19:05:40.404807Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-2a68512ec5] Feb 9 19:05:40.413167 waagent[1497]: 2024-02-09T19:05:40.405530Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:05:40.413167 waagent[1497]: 2024-02-09T19:05:40.406678Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:05:40.420150 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:05:40.420500 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:05:40.420590 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:05:40.420898 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 19:05:40.425416 systemd-networkd[1223]: eth0: DHCPv6 lease lost Feb 9 19:05:40.426937 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:05:40.427184 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:05:40.429347 systemd[1]: Starting systemd-networkd.service... Feb 9 19:05:40.467102 systemd-networkd[1556]: enP8115s1: Link UP Feb 9 19:05:40.467112 systemd-networkd[1556]: enP8115s1: Gained carrier Feb 9 19:05:40.468460 systemd-networkd[1556]: eth0: Link UP Feb 9 19:05:40.468470 systemd-networkd[1556]: eth0: Gained carrier Feb 9 19:05:40.468906 systemd-networkd[1556]: lo: Link UP Feb 9 19:05:40.468916 systemd-networkd[1556]: lo: Gained carrier Feb 9 19:05:40.469227 systemd-networkd[1556]: eth0: Gained IPv6LL Feb 9 19:05:40.469521 systemd-networkd[1556]: Enumeration completed Feb 9 19:05:40.469657 systemd[1]: Started systemd-networkd.service. Feb 9 19:05:40.471932 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:05:40.476931 waagent[1497]: 2024-02-09T19:05:40.474931Z INFO Daemon Daemon Create user account if not exists Feb 9 19:05:40.476931 waagent[1497]: 2024-02-09T19:05:40.475699Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:05:40.477327 waagent[1497]: 2024-02-09T19:05:40.477264Z INFO Daemon Daemon Configure sudoer Feb 9 19:05:40.479140 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:05:40.490892 waagent[1497]: 2024-02-09T19:05:40.490819Z INFO Daemon Daemon Configure sshd Feb 9 19:05:40.495034 waagent[1497]: 2024-02-09T19:05:40.491121Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:05:40.526482 systemd-networkd[1556]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:05:40.529947 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 19:05:41.773896 waagent[1497]: 2024-02-09T19:05:41.773795Z INFO Daemon Daemon Provisioning complete Feb 9 19:05:41.788663 waagent[1497]: 2024-02-09T19:05:41.788593Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:05:41.795763 waagent[1497]: 2024-02-09T19:05:41.789023Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:05:41.795763 waagent[1497]: 2024-02-09T19:05:41.790754Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:05:42.056541 waagent[1567]: 2024-02-09T19:05:42.056344Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:05:42.057267 waagent[1567]: 2024-02-09T19:05:42.057199Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:42.057428 waagent[1567]: 2024-02-09T19:05:42.057358Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:42.068472 waagent[1567]: 2024-02-09T19:05:42.068402Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:05:42.068634 waagent[1567]: 2024-02-09T19:05:42.068583Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:05:42.128301 waagent[1567]: 2024-02-09T19:05:42.128187Z INFO ExtHandler ExtHandler Found private key matching thumbprint 04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0 Feb 9 19:05:42.128526 waagent[1567]: 2024-02-09T19:05:42.128464Z INFO ExtHandler ExtHandler Certificate with thumbprint 2676C4A6948D5AAD43DA940E7BC6AC1D3F6A47A8 has no matching private key. 
Feb 9 19:05:42.128756 waagent[1567]: 2024-02-09T19:05:42.128705Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:05:42.143207 waagent[1567]: 2024-02-09T19:05:42.143143Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: ebe126e7-e484-47fd-8827-c209a82b9ad9 New eTag: 17862325162028876038] Feb 9 19:05:42.143789 waagent[1567]: 2024-02-09T19:05:42.143730Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:05:42.256755 waagent[1567]: 2024-02-09T19:05:42.256586Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:05:42.283626 waagent[1567]: 2024-02-09T19:05:42.282612Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1567 Feb 9 19:05:42.288005 waagent[1567]: 2024-02-09T19:05:42.287934Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:05:42.291957 waagent[1567]: 2024-02-09T19:05:42.289890Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:05:42.373501 waagent[1567]: 2024-02-09T19:05:42.373425Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:05:42.373972 waagent[1567]: 2024-02-09T19:05:42.373897Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:05:42.382467 waagent[1567]: 2024-02-09T19:05:42.382412Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:05:42.382919 waagent[1567]: 2024-02-09T19:05:42.382860Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:05:42.383976 waagent[1567]: 2024-02-09T19:05:42.383910Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:05:42.385292 waagent[1567]: 2024-02-09T19:05:42.385234Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:05:42.385785 waagent[1567]: 2024-02-09T19:05:42.385730Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:42.385941 waagent[1567]: 2024-02-09T19:05:42.385894Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:42.386488 waagent[1567]: 2024-02-09T19:05:42.386430Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:05:42.387032 waagent[1567]: 2024-02-09T19:05:42.386975Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 19:05:42.387249 waagent[1567]: 2024-02-09T19:05:42.387196Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:42.387709 waagent[1567]: 2024-02-09T19:05:42.387656Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:42.387864 waagent[1567]: 2024-02-09T19:05:42.387791Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:05:42.387864 waagent[1567]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:05:42.387864 waagent[1567]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:05:42.387864 waagent[1567]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:05:42.387864 waagent[1567]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:42.387864 waagent[1567]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:42.387864 waagent[1567]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:42.388316 waagent[1567]: 2024-02-09T19:05:42.388261Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:05:42.388658 waagent[1567]: 2024-02-09T19:05:42.388606Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:05:42.391297 waagent[1567]: 2024-02-09T19:05:42.391078Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:05:42.392783 waagent[1567]: 2024-02-09T19:05:42.392727Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:05:42.393055 waagent[1567]: 2024-02-09T19:05:42.392981Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:05:42.393786 waagent[1567]: 2024-02-09T19:05:42.393724Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:05:42.393989 waagent[1567]: 2024-02-09T19:05:42.393929Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:05:42.394255 waagent[1567]: 2024-02-09T19:05:42.394205Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 9 19:05:42.404872 waagent[1567]: 2024-02-09T19:05:42.404820Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:05:42.405474 waagent[1567]: 2024-02-09T19:05:42.405425Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:05:42.406276 waagent[1567]: 2024-02-09T19:05:42.406222Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:05:42.422783 waagent[1567]: 2024-02-09T19:05:42.422717Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1556' Feb 9 19:05:42.452050 waagent[1567]: 2024-02-09T19:05:42.451975Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 9 19:05:42.549165 waagent[1567]: 2024-02-09T19:05:42.549045Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:05:42.549165 waagent[1567]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:05:42.549165 waagent[1567]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:05:42.549165 waagent[1567]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:99:0c brd ff:ff:ff:ff:ff:ff Feb 9 19:05:42.549165 waagent[1567]: 3: enP8115s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:99:0c brd ff:ff:ff:ff:ff:ff\ altname enP8115p0s2 Feb 9 19:05:42.549165 waagent[1567]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:05:42.549165 waagent[1567]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:05:42.549165 waagent[1567]: 2: eth0 inet 10.200.8.48/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:05:42.549165 
waagent[1567]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:05:42.549165 waagent[1567]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:05:42.549165 waagent[1567]: 2: eth0 inet6 fe80::222:48ff:fe9d:990c/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:05:42.801196 waagent[1567]: 2024-02-09T19:05:42.801069Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:05:43.794935 waagent[1497]: 2024-02-09T19:05:43.794746Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:05:43.801363 waagent[1497]: 2024-02-09T19:05:43.801290Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:05:44.819232 waagent[1607]: 2024-02-09T19:05:44.819114Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:05:44.819989 waagent[1607]: 2024-02-09T19:05:44.819917Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:05:44.820140 waagent[1607]: 2024-02-09T19:05:44.820084Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:05:44.829815 waagent[1607]: 2024-02-09T19:05:44.829715Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:05:44.830194 waagent[1607]: 2024-02-09T19:05:44.830137Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:44.830365 waagent[1607]: 2024-02-09T19:05:44.830315Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:44.842275 waagent[1607]: 2024-02-09T19:05:44.842199Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:05:44.850803 waagent[1607]: 2024-02-09T19:05:44.850741Z INFO ExtHandler ExtHandler HostGAPlugin version: 
1.0.8.143 Feb 9 19:05:44.851712 waagent[1607]: 2024-02-09T19:05:44.851651Z INFO ExtHandler Feb 9 19:05:44.851857 waagent[1607]: 2024-02-09T19:05:44.851807Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 436503fc-8356-4521-8222-b9e6aca96944 eTag: 17862325162028876038 source: Fabric] Feb 9 19:05:44.852567 waagent[1607]: 2024-02-09T19:05:44.852509Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 19:05:44.853660 waagent[1607]: 2024-02-09T19:05:44.853599Z INFO ExtHandler Feb 9 19:05:44.853793 waagent[1607]: 2024-02-09T19:05:44.853742Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:05:44.860602 waagent[1607]: 2024-02-09T19:05:44.860551Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:05:44.861031 waagent[1607]: 2024-02-09T19:05:44.860983Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:05:44.880517 waagent[1607]: 2024-02-09T19:05:44.880461Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 9 19:05:44.942433 waagent[1607]: 2024-02-09T19:05:44.942274Z INFO ExtHandler Downloaded certificate {'thumbprint': '2676C4A6948D5AAD43DA940E7BC6AC1D3F6A47A8', 'hasPrivateKey': False} Feb 9 19:05:44.943310 waagent[1607]: 2024-02-09T19:05:44.943249Z INFO ExtHandler Downloaded certificate {'thumbprint': '04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0', 'hasPrivateKey': True} Feb 9 19:05:44.944307 waagent[1607]: 2024-02-09T19:05:44.944242Z INFO ExtHandler Fetch goal state completed Feb 9 19:05:44.966088 waagent[1607]: 2024-02-09T19:05:44.966001Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1607 Feb 9 19:05:44.969385 waagent[1607]: 2024-02-09T19:05:44.969308Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:05:44.970815 waagent[1607]: 2024-02-09T19:05:44.970756Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:05:44.975442 waagent[1607]: 2024-02-09T19:05:44.975367Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:05:44.975791 waagent[1607]: 2024-02-09T19:05:44.975734Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:05:44.983626 waagent[1607]: 2024-02-09T19:05:44.983573Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:05:44.984067 waagent[1607]: 2024-02-09T19:05:44.984010Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:05:44.990016 waagent[1607]: 2024-02-09T19:05:44.989925Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Feb 9 19:05:44.994695 waagent[1607]: 2024-02-09T19:05:44.994635Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:05:44.996070 waagent[1607]: 2024-02-09T19:05:44.996013Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:05:44.996412 waagent[1607]: 2024-02-09T19:05:44.996339Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:44.996837 waagent[1607]: 2024-02-09T19:05:44.996781Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:44.997391 waagent[1607]: 2024-02-09T19:05:44.997316Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:05:44.997682 waagent[1607]: 2024-02-09T19:05:44.997623Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:05:44.997682 waagent[1607]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:05:44.997682 waagent[1607]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:05:44.997682 waagent[1607]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:05:44.997682 waagent[1607]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:44.997682 waagent[1607]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:44.997682 waagent[1607]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:05:44.999820 waagent[1607]: 2024-02-09T19:05:44.999729Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:05:45.000791 waagent[1607]: 2024-02-09T19:05:45.000737Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 19:05:45.000880 waagent[1607]: 2024-02-09T19:05:45.000641Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:05:45.004144 waagent[1607]: 2024-02-09T19:05:45.004044Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:05:45.004521 waagent[1607]: 2024-02-09T19:05:45.004462Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:05:45.004658 waagent[1607]: 2024-02-09T19:05:45.004587Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:05:45.004774 waagent[1607]: 2024-02-09T19:05:45.004703Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:05:45.005328 waagent[1607]: 2024-02-09T19:05:45.005272Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:05:45.005929 waagent[1607]: 2024-02-09T19:05:45.005876Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:05:45.008449 waagent[1607]: 2024-02-09T19:05:45.008152Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:05:45.009988 waagent[1607]: 2024-02-09T19:05:45.009930Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:05:45.022781 waagent[1607]: 2024-02-09T19:05:45.022718Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:05:45.022781 waagent[1607]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:05:45.022781 waagent[1607]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:05:45.022781 waagent[1607]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:99:0c brd ff:ff:ff:ff:ff:ff Feb 9 19:05:45.022781 waagent[1607]: 3: enP8115s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:99:0c brd ff:ff:ff:ff:ff:ff\ altname enP8115p0s2 Feb 9 
19:05:45.022781 waagent[1607]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:05:45.022781 waagent[1607]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:05:45.022781 waagent[1607]: 2: eth0 inet 10.200.8.48/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:05:45.022781 waagent[1607]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:05:45.022781 waagent[1607]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:05:45.022781 waagent[1607]: 2: eth0 inet6 fe80::222:48ff:fe9d:990c/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:05:45.032321 waagent[1607]: 2024-02-09T19:05:45.032234Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:05:45.032888 waagent[1607]: 2024-02-09T19:05:45.032831Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:05:45.096177 waagent[1607]: 2024-02-09T19:05:45.096043Z INFO ExtHandler ExtHandler Feb 9 19:05:45.098284 waagent[1607]: 2024-02-09T19:05:45.098161Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6bce7187-2470-4b7a-8fb9-cb02dc78536f correlation a819e91d-99cf-418a-a73f-a4206b608541 created: 2024-02-09T19:04:08.077469Z] Feb 9 19:05:45.103402 waagent[1607]: 2024-02-09T19:05:45.103260Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:05:45.106801 waagent[1607]: 2024-02-09T19:05:45.106739Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Feb 9 19:05:45.128330 waagent[1607]: 2024-02-09T19:05:45.128219Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 19:05:45.128330 waagent[1607]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.128330 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.128330 waagent[1607]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.128330 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.128330 waagent[1607]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.128330 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.128330 waagent[1607]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:05:45.128330 waagent[1607]: 8 3184 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:05:45.128330 waagent[1607]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:05:45.137264 waagent[1607]: 2024-02-09T19:05:45.137190Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:05:45.141419 waagent[1607]: 2024-02-09T19:05:45.141278Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:05:45.141419 waagent[1607]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.141419 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.141419 waagent[1607]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.141419 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.141419 waagent[1607]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:05:45.141419 waagent[1607]: pkts bytes target prot opt in out source destination Feb 9 19:05:45.141419 waagent[1607]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:05:45.141419 waagent[1607]: 12 3770 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:05:45.141419 waagent[1607]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:05:45.142273 waagent[1607]: 2024-02-09T19:05:45.142215Z INFO 
EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:05:45.147598 waagent[1607]: 2024-02-09T19:05:45.147528Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B8A50475-187A-4778-8CBB-9D1E203CA8FA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:06:09.090250 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:06:15.633269 systemd[1]: Created slice system-sshd.slice. Feb 9 19:06:15.635270 systemd[1]: Started sshd@0-10.200.8.48:22-10.200.12.6:47272.service. Feb 9 19:06:16.394600 update_engine[1366]: I0209 19:06:16.394523 1366 update_attempter.cc:509] Updating boot flags... Feb 9 19:06:16.486408 sshd[1651]: Accepted publickey for core from 10.200.12.6 port 47272 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:16.485728 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:16.506165 systemd[1]: Started session-3.scope. Feb 9 19:06:16.506773 systemd-logind[1365]: New session 3 of user core. Feb 9 19:06:17.018069 systemd[1]: Started sshd@1-10.200.8.48:22-10.200.12.6:51050.service. Feb 9 19:06:17.639998 sshd[1722]: Accepted publickey for core from 10.200.12.6 port 51050 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:17.641697 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:17.646895 systemd[1]: Started session-4.scope. Feb 9 19:06:17.647159 systemd-logind[1365]: New session 4 of user core. Feb 9 19:06:18.078950 sshd[1722]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:18.082341 systemd[1]: sshd@1-10.200.8.48:22-10.200.12.6:51050.service: Deactivated successfully. Feb 9 19:06:18.084462 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:06:18.085187 systemd-logind[1365]: Session 4 logged out. Waiting for processes to exit. 
Feb 9 19:06:18.086583 systemd-logind[1365]: Removed session 4. Feb 9 19:06:18.183367 systemd[1]: Started sshd@2-10.200.8.48:22-10.200.12.6:51054.service. Feb 9 19:06:18.803407 sshd[1729]: Accepted publickey for core from 10.200.12.6 port 51054 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:18.804864 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:18.809436 systemd-logind[1365]: New session 5 of user core. Feb 9 19:06:18.809935 systemd[1]: Started session-5.scope. Feb 9 19:06:19.237825 sshd[1729]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:19.241006 systemd[1]: sshd@2-10.200.8.48:22-10.200.12.6:51054.service: Deactivated successfully. Feb 9 19:06:19.242286 systemd-logind[1365]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:06:19.242402 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:06:19.243607 systemd-logind[1365]: Removed session 5. Feb 9 19:06:19.341814 systemd[1]: Started sshd@3-10.200.8.48:22-10.200.12.6:51064.service. Feb 9 19:06:19.967132 sshd[1736]: Accepted publickey for core from 10.200.12.6 port 51064 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:19.968628 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:19.973480 systemd-logind[1365]: New session 6 of user core. Feb 9 19:06:19.973742 systemd[1]: Started session-6.scope. Feb 9 19:06:20.409730 sshd[1736]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:20.413023 systemd[1]: sshd@3-10.200.8.48:22-10.200.12.6:51064.service: Deactivated successfully. Feb 9 19:06:20.414339 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:06:20.415920 systemd-logind[1365]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:06:20.417068 systemd-logind[1365]: Removed session 6. Feb 9 19:06:20.512747 systemd[1]: Started sshd@4-10.200.8.48:22-10.200.12.6:51068.service. 
Feb 9 19:06:21.135067 sshd[1743]: Accepted publickey for core from 10.200.12.6 port 51068 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:21.136821 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:21.141821 systemd[1]: Started session-7.scope.
Feb 9 19:06:21.142080 systemd-logind[1365]: New session 7 of user core.
Feb 9 19:06:21.831577 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 9 19:06:21.831937 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:06:21.858856 dbus-daemon[1353]: \xd0M\x97\xe6\xb0U: received setenforce notice (enforcing=-515456240)
Feb 9 19:06:21.861391 sudo[1750]: pam_unix(sudo:session): session closed for user root
Feb 9 19:06:21.976850 sshd[1743]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:21.980619 systemd[1]: sshd@4-10.200.8.48:22-10.200.12.6:51068.service: Deactivated successfully.
Feb 9 19:06:21.982234 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:06:21.982251 systemd-logind[1365]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:06:21.983814 systemd-logind[1365]: Removed session 7.
Feb 9 19:06:22.078013 systemd[1]: Started sshd@5-10.200.8.48:22-10.200.12.6:51074.service.
Feb 9 19:06:22.694610 sshd[1754]: Accepted publickey for core from 10.200.12.6 port 51074 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:22.696446 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:22.701456 systemd-logind[1365]: New session 8 of user core.
Feb 9 19:06:22.701732 systemd[1]: Started session-8.scope.
Feb 9 19:06:23.034280 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 9 19:06:23.034645 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:06:23.037667 sudo[1759]: pam_unix(sudo:session): session closed for user root
Feb 9 19:06:23.042188 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 9 19:06:23.042466 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:06:23.051451 systemd[1]: Stopping audit-rules.service...
Feb 9 19:06:23.051000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:06:23.053515 auditctl[1762]: No rules
Feb 9 19:06:23.053944 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 9 19:06:23.054153 systemd[1]: Stopped audit-rules.service.
Feb 9 19:06:23.056123 systemd[1]: Starting audit-rules.service...
Feb 9 19:06:23.062391 kernel: audit: type=1305 audit(1707505583.051:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:06:23.051000 audit[1762]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd11c8d70 a2=420 a3=0 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:23.051000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:06:23.083525 kernel: audit: type=1300 audit(1707505583.051:137): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd11c8d70 a2=420 a3=0 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:23.083577 kernel: audit: type=1327 audit(1707505583.051:137): proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:06:23.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.087107 augenrules[1780]: No rules
Feb 9 19:06:23.088037 systemd[1]: Finished audit-rules.service.
Feb 9 19:06:23.089093 sudo[1758]: pam_unix(sudo:session): session closed for user root
Feb 9 19:06:23.094630 kernel: audit: type=1131 audit(1707505583.052:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.105512 kernel: audit: type=1130 audit(1707505583.082:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.087000 audit[1758]: USER_END pid=1758 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.106434 kernel: audit: type=1106 audit(1707505583.087:140): pid=1758 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.087000 audit[1758]: CRED_DISP pid=1758 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.119444 kernel: audit: type=1104 audit(1707505583.087:141): pid=1758 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.195022 sshd[1754]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:23.195000 audit[1754]: USER_END pid=1754 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.198293 systemd[1]: sshd@5-10.200.8.48:22-10.200.12.6:51074.service: Deactivated successfully.
Feb 9 19:06:23.199150 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:06:23.205358 systemd-logind[1365]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:06:23.206286 systemd-logind[1365]: Removed session 8.
Feb 9 19:06:23.195000 audit[1754]: CRED_DISP pid=1754 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.222498 kernel: audit: type=1106 audit(1707505583.195:142): pid=1754 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.222563 kernel: audit: type=1104 audit(1707505583.195:143): pid=1754 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.222589 kernel: audit: type=1131 audit(1707505583.195:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.48:22-10.200.12.6:51074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.48:22-10.200.12.6:51074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.313614 systemd[1]: Started sshd@6-10.200.8.48:22-10.200.12.6:51088.service.
Feb 9 19:06:23.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:51088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:23.930000 audit[1787]: USER_ACCT pid=1787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.931924 sshd[1787]: Accepted publickey for core from 10.200.12.6 port 51088 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:23.931000 audit[1787]: CRED_ACQ pid=1787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.931000 audit[1787]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5ad81e70 a2=3 a3=0 items=0 ppid=1 pid=1787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:23.931000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:23.933368 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:23.938189 systemd[1]: Started session-9.scope.
Feb 9 19:06:23.938510 systemd-logind[1365]: New session 9 of user core.
Feb 9 19:06:23.942000 audit[1787]: USER_START pid=1787 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:23.944000 audit[1790]: CRED_ACQ pid=1790 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:24.268000 audit[1791]: USER_ACCT pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:24.268000 audit[1791]: CRED_REFR pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:24.269866 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:06:24.270133 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:06:24.270000 audit[1791]: USER_START pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:25.003453 systemd[1]: Reloading.
Feb 9 19:06:25.104489 /usr/lib/systemd/system-generators/torcx-generator[1820]: time="2024-02-09T19:06:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:06:25.104556 /usr/lib/systemd/system-generators/torcx-generator[1820]: time="2024-02-09T19:06:25Z" level=info msg="torcx already run"
Feb 9 19:06:25.184320 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:06:25.184340 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:06:25.202458 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:06:25.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:25.281364 systemd[1]: Started kubelet.service.
Feb 9 19:06:25.317053 systemd[1]: Starting coreos-metadata.service...
Feb 9 19:06:25.351278 kubelet[1888]: E0209 19:06:25.351204 1888 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:06:25.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:06:25.353405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:06:25.353623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:06:25.373394 coreos-metadata[1896]: Feb 09 19:06:25.373 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:06:25.376648 coreos-metadata[1896]: Feb 09 19:06:25.376 INFO Fetch successful
Feb 9 19:06:25.377268 coreos-metadata[1896]: Feb 09 19:06:25.377 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 9 19:06:25.380069 coreos-metadata[1896]: Feb 09 19:06:25.380 INFO Fetch successful
Feb 9 19:06:25.380753 coreos-metadata[1896]: Feb 09 19:06:25.380 INFO Fetching http://168.63.129.16/machine/7fa85cf7-af5d-4b62-a1e0-669e3515fdb6/dbe01dbd%2D1229%2D4c77%2Dad31%2D7d1fb85e0eae.%5Fci%2D3510.3.2%2Da%2D2a68512ec5?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 9 19:06:25.382111 coreos-metadata[1896]: Feb 09 19:06:25.382 INFO Fetch successful
Feb 9 19:06:25.413899 coreos-metadata[1896]: Feb 09 19:06:25.413 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:06:25.427961 coreos-metadata[1896]: Feb 09 19:06:25.427 INFO Fetch successful
Feb 9 19:06:25.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:25.441914 systemd[1]: Finished coreos-metadata.service.
Feb 9 19:06:28.817670 systemd[1]: Stopped kubelet.service.
Feb 9 19:06:28.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:28.824609 kernel: kauditd_printk_skb: 14 callbacks suppressed
Feb 9 19:06:28.824689 kernel: audit: type=1130 audit(1707505588.816:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:28.842311 kernel: audit: type=1131 audit(1707505588.817:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:28.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:28.837013 systemd[1]: Reloading.
Feb 9 19:06:28.921846 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-02-09T19:06:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:06:28.922329 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-02-09T19:06:28Z" level=info msg="torcx already run"
Feb 9 19:06:29.017966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:06:29.017987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:06:29.035921 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:06:29.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:29.122116 systemd[1]: Started kubelet.service.
Feb 9 19:06:29.135404 kernel: audit: type=1130 audit(1707505589.120:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:29.172036 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:06:29.172404 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:06:29.172555 kubelet[2026]: I0209 19:06:29.172526 2026 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:06:29.173763 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:06:29.173842 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:06:29.587693 kubelet[2026]: I0209 19:06:29.587652 2026 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:06:29.587693 kubelet[2026]: I0209 19:06:29.587681 2026 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:06:29.587974 kubelet[2026]: I0209 19:06:29.587954 2026 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:06:29.590432 kubelet[2026]: I0209 19:06:29.590409 2026 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:06:29.593466 kubelet[2026]: I0209 19:06:29.593445 2026 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:06:29.593875 kubelet[2026]: I0209 19:06:29.593860 2026 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:06:29.593970 kubelet[2026]: I0209 19:06:29.593951 2026 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:06:29.594115 kubelet[2026]: I0209 19:06:29.593986 2026 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:06:29.594115 kubelet[2026]: I0209 19:06:29.594004 2026 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:06:29.594200 kubelet[2026]: I0209 19:06:29.594133 2026 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:06:29.597757 kubelet[2026]: I0209 19:06:29.597739 2026 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:06:29.597853 kubelet[2026]: I0209 19:06:29.597763 2026 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:06:29.597853 kubelet[2026]: I0209 19:06:29.597795 2026 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:06:29.597853 kubelet[2026]: I0209 19:06:29.597816 2026 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:06:29.598326 kubelet[2026]: E0209 19:06:29.598305 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:29.598486 kubelet[2026]: E0209 19:06:29.598473 2026 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:29.598838 kubelet[2026]: I0209 19:06:29.598823 2026 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:06:29.599172 kubelet[2026]: W0209 19:06:29.599158 2026 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:06:29.600405 kubelet[2026]: I0209 19:06:29.600390 2026 server.go:1186] "Started kubelet"
Feb 9 19:06:29.600542 kubelet[2026]: I0209 19:06:29.600530 2026 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:06:29.601331 kubelet[2026]: I0209 19:06:29.601312 2026 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:06:29.603000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:06:29.610441 kubelet[2026]: I0209 19:06:29.610427 2026 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Feb 9 19:06:29.610550 kubelet[2026]: I0209 19:06:29.610540 2026 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Feb 9 19:06:29.610717 kubelet[2026]: I0209 19:06:29.610708 2026 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:06:29.612353 kubelet[2026]: I0209 19:06:29.612338 2026 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:06:29.612514 kubelet[2026]: I0209 19:06:29.612503 2026 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:06:29.615202 kubelet[2026]: W0209 19:06:29.615186 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:06:29.615305 kubelet[2026]: E0209 19:06:29.615297 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:06:29.615511 kubelet[2026]: E0209 19:06:29.615498 2026 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:06:29.615822 kubelet[2026]: E0209 19:06:29.615725 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b24749781f7f36", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 600354102, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 600354102, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:06:29.615989 kubelet[2026]: W0209 19:06:29.615976 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:06:29.616063 kubelet[2026]: E0209 19:06:29.616054 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:06:29.616158 kubelet[2026]: W0209 19:06:29.616148 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:06:29.616233 kubelet[2026]: E0209 19:06:29.616225 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:06:29.620386 kernel: audit: type=1400 audit(1707505589.603:160): avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:06:29.603000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:06:29.627958 kubelet[2026]: E0209 19:06:29.621341 2026 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:06:29.627958 kubelet[2026]: E0209 19:06:29.621359 2026 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:06:29.627958 kubelet[2026]: E0209 19:06:29.623970 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b24749795fe664", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 621352036, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 621352036, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:06:29.628390 kernel: audit: type=1401 audit(1707505589.603:160): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:06:29.603000 audit[2026]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ff4030 a1=c000fd01f8 a2=c000ff4000 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:29.649388 kernel: audit: type=1300 audit(1707505589.603:160): arch=c000003e syscall=188 success=no exit=-22 a0=c000ff4030 a1=c000fd01f8 a2=c000ff4000 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:29.603000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:06:29.668386 kernel: audit: type=1327 audit(1707505589.603:160): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:06:29.669289 kubelet[2026]: I0209 19:06:29.669275 2026 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:06:29.671047 kubelet[2026]: I0209 19:06:29.671029 2026 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:06:29.671174 kubelet[2026]: I0209 19:06:29.671163 2026 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:06:29.608000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:06:29.675444 kubelet[2026]: E0209 19:06:29.675367 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:06:29.678560 kubelet[2026]: I0209 19:06:29.678544 2026 policy_none.go:49] "None policy: Start" Feb 9 19:06:29.679090 kubelet[2026]: E0209 19:06:29.679033 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:29.679586 kubelet[2026]: I0209 19:06:29.679569 2026 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:06:29.679682 kubelet[2026]: I0209 19:06:29.679674 2026 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:06:29.608000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:06:29.691554 kubelet[2026]: I0209 19:06:29.691538 2026 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:06:29.691687 kubelet[2026]: I0209 19:06:29.691677 2026 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:06:29.691881 kubelet[2026]: I0209 19:06:29.691870 2026 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:06:29.694092 kernel: audit: type=1400 audit(1707505589.608:161): avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:06:29.694161 kernel: audit: type=1401 audit(1707505589.608:161): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:06:29.694239 kubelet[2026]: E0209 19:06:29.694072 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:29.608000 audit[2026]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000de6000 a1=c000fd0000 a2=c000ff45a0 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.694891 kubelet[2026]: E0209 19:06:29.694879 2026 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.48\" not found" Feb 9 19:06:29.696646 kubelet[2026]: E0209 19:06:29.696594 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497da69963", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 693094243, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 693094243, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:29.713921 kernel: audit: type=1300 audit(1707505589.608:161): arch=c000003e syscall=188 success=no exit=-22 a0=c000de6000 a1=c000fd0000 a2=c000ff45a0 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.608000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:06:29.613000 audit[2037]: NETFILTER_CFG table=mangle:8 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.613000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7b9c6e00 a2=0 a3=7ffc7b9c6dec items=0 ppid=2026 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:06:29.619000 audit[2038]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.619000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffcadc96df0 a2=0 a3=7ffcadc96ddc items=0 ppid=2026 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:06:29.619000 audit[2040]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.619000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee11f5fc0 a2=0 a3=7ffee11f5fac items=0 ppid=2026 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:06:29.624000 audit[2042]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.624000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc04f81090 a2=0 a3=7ffc04f8107c items=0 ppid=2026 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:06:29.671000 audit[2045]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.671000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe4c5bd3e0 a2=0 a3=7ffe4c5bd3cc items=0 ppid=2026 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:06:29.672000 audit[2047]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.672000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe15f755b0 a2=0 a3=7ffe15f7559c items=0 ppid=2026 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:06:29.686000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:06:29.686000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:06:29.686000 audit[2026]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011386c0 a1=c001132078 a2=c001138690 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.686000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:06:29.716134 kubelet[2026]: I0209 19:06:29.715728 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:29.717806 kubelet[2026]: E0209 19:06:29.717737 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 715663732, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:29.716000 audit[2052]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.716000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd73529240 a2=0 a3=7ffd7352922c items=0 ppid=2026 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:06:29.718557 kubelet[2026]: E0209 19:06:29.718540 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:29.719197 kubelet[2026]: E0209 19:06:29.719117 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 715672532, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:29.720194 kubelet[2026]: E0209 19:06:29.720135 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 715689333, time.Local), Count:2, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:29.738000 audit[2055]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.738000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffef61f39f0 a2=0 a3=7ffef61f39dc items=0 ppid=2026 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:06:29.739000 audit[2056]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.739000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9148eaf0 a2=0 a3=7ffd9148eadc items=0 ppid=2026 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:06:29.740000 audit[2057]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 9 19:06:29.740000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8fc67960 a2=0 a3=7ffd8fc6794c items=0 ppid=2026 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:06:29.742000 audit[2059]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.742000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff5486dd90 a2=0 a3=7fff5486dd7c items=0 ppid=2026 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:06:29.744000 audit[2061]: NETFILTER_CFG table=nat:19 family=2 entries=2 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.744000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffca82e2680 a2=0 a3=7ffca82e266c items=0 ppid=2026 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.744000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:06:29.782000 audit[2064]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.782000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd75235350 a2=0 a3=7ffd7523533c items=0 ppid=2026 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:06:29.784000 audit[2066]: NETFILTER_CFG table=nat:21 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.784000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd686a5e40 a2=0 a3=7ffd686a5e2c items=0 ppid=2026 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:06:29.818332 kubelet[2026]: E0209 19:06:29.818299 2026 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group 
"coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:06:29.867000 audit[2069]: NETFILTER_CFG table=nat:22 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.867000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffd693f4640 a2=0 a3=7ffd693f462c items=0 ppid=2026 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:06:29.869000 audit[2070]: NETFILTER_CFG table=mangle:23 family=10 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.869000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc905b5350 a2=0 a3=7ffc905b533c items=0 ppid=2026 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:06:29.870000 audit[2071]: NETFILTER_CFG table=mangle:24 family=2 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.870000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff708310 a2=0 a3=7ffcff7082fc items=0 ppid=2026 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:06:29.872000 audit[2072]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.872000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeaccd8bf0 a2=0 a3=7ffeaccd8bdc items=0 ppid=2026 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:06:29.875033 kubelet[2026]: I0209 19:06:29.869446 2026 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:06:29.873000 audit[2073]: NETFILTER_CFG table=nat:26 family=2 entries=1 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.873000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe1fe2d20 a2=0 a3=7fffe1fe2d0c items=0 ppid=2026 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:06:29.875000 audit[2075]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:29.875000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffebfbeae0 a2=0 a3=7fffebfbeacc items=0 ppid=2026 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:06:29.876000 audit[2076]: NETFILTER_CFG table=nat:28 family=10 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.876000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd756a0470 a2=0 a3=7ffd756a045c items=0 ppid=2026 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.876000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:06:29.877000 audit[2077]: NETFILTER_CFG table=filter:29 family=10 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.877000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdc5621ff0 a2=0 a3=7ffdc5621fdc items=0 ppid=2026 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:06:29.879000 audit[2079]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.879000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff84a226d0 a2=0 a3=7fff84a226bc items=0 ppid=2026 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:06:29.880000 audit[2080]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.880000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3fafa350 a2=0 a3=7ffe3fafa33c items=0 ppid=2026 pid=2080 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:06:29.881000 audit[2081]: NETFILTER_CFG table=nat:32 family=10 entries=1 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.881000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd051ed30 a2=0 a3=7ffcd051ed1c items=0 ppid=2026 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:06:29.883000 audit[2083]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.883000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd0fefd990 a2=0 a3=7ffd0fefd97c items=0 ppid=2026 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:06:29.885000 audit[2085]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.885000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 
a1=7ffd9b05a6c0 a2=0 a3=7ffd9b05a6ac items=0 ppid=2026 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:06:29.887000 audit[2087]: NETFILTER_CFG table=nat:35 family=10 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.887000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd5c44e620 a2=0 a3=7ffd5c44e60c items=0 ppid=2026 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:06:29.889000 audit[2089]: NETFILTER_CFG table=nat:36 family=10 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.889000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffffcc71000 a2=0 a3=7ffffcc70fec items=0 ppid=2026 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.889000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:06:29.905000 audit[2091]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.905000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fffe3886e10 a2=0 a3=7fffe3886dfc items=0 ppid=2026 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:06:29.907922 kubelet[2026]: I0209 19:06:29.907896 2026 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:06:29.908000 kubelet[2026]: I0209 19:06:29.907937 2026 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:06:29.908000 kubelet[2026]: I0209 19:06:29.907962 2026 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:06:29.908073 kubelet[2026]: E0209 19:06:29.908012 2026 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:06:29.908000 audit[2092]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.908000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb5762af0 a2=0 a3=7fffb5762adc items=0 ppid=2026 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:06:29.910118 kubelet[2026]: W0209 19:06:29.910089 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:29.910182 kubelet[2026]: E0209 19:06:29.910134 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:29.909000 audit[2093]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.909000 audit[2093]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe330517b0 a2=0 a3=7ffe3305179c items=0 ppid=2026 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:06:29.910000 audit[2094]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:29.910000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc696b840 a2=0 a3=7ffcc696b82c items=0 ppid=2026 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:29.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:06:29.920077 kubelet[2026]: I0209 19:06:29.920058 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:29.921443 kubelet[2026]: E0209 19:06:29.921423 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:29.921552 kubelet[2026]: E0209 19:06:29.921433 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 920016174, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:29.922554 kubelet[2026]: E0209 19:06:29.922482 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 920020774, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:30.003452 kubelet[2026]: E0209 19:06:30.003310 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 920023074, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
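(Editor's note, not part of the log: the audit `PROCTITLE` records above hex-encode the process's argv, because the kernel joins the arguments with NUL bytes. A minimal decoder makes them readable; the sample string below is copied verbatim from one of the `ip6tables` records above.)

```python
# Decode an audit PROCTITLE value: hex -> bytes, then split on the NUL
# separators between argv entries. Sample copied from a record above.
hexstr = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4649524557414C4C002D740066696C746572"
)
argv = bytes.fromhex(hexstr).split(b"\x00")
decoded = " ".join(a.decode() for a in argv)
print(decoded)  # ip6tables -w 5 -W 100000 -N KUBE-FIREWALL -t filter
```

This recovers the `ip6tables -w 5 -W 100000 …` invocations kubelet issues while setting up its KUBE-MARK-DROP / KUBE-FIREWALL / KUBE-POSTROUTING chains.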
Feb 9 19:06:30.220212 kubelet[2026]: E0209 19:06:30.220013 2026 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:06:30.323470 kubelet[2026]: I0209 19:06:30.323427 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:30.325179 kubelet[2026]: E0209 19:06:30.324985 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:30.325179 kubelet[2026]: E0209 19:06:30.324976 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 30, 323362060, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:30.403821 kubelet[2026]: E0209 19:06:30.403711 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 30, 323387960, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:30.474625 kubelet[2026]: W0209 19:06:30.474484 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:30.474625 kubelet[2026]: E0209 19:06:30.474536 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:30.599517 kubelet[2026]: E0209 19:06:30.599452 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:30.604091 kubelet[2026]: E0209 19:06:30.603992 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 30, 323392861, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 
0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:30.871413 kubelet[2026]: W0209 19:06:30.871347 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:30.871413 kubelet[2026]: E0209 19:06:30.871414 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:31.021942 kubelet[2026]: E0209 19:06:31.021889 2026 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:06:31.108512 kubelet[2026]: W0209 19:06:31.108464 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:31.108512 kubelet[2026]: E0209 19:06:31.108512 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:31.126723 kubelet[2026]: I0209 19:06:31.126603 2026 kubelet_node_status.go:70] "Attempting to register 
node" node="10.200.8.48" Feb 9 19:06:31.128454 kubelet[2026]: E0209 19:06:31.128424 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:31.128634 kubelet[2026]: E0209 19:06:31.128422 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 31, 126550438, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
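(Editor's note, not part of the log: the repeated "Server rejected event" lines carry a serialized `v1.Event`; the useful signal is its `Reason` and `Count`, which climbs from 3 to 7 above as kubelet re-emits the same node-condition events. A small, hypothetical extraction sketch — the `line` below is an abbreviated fragment of one record, and the regex is an assumption about the printed Go-struct format, not a kubelet API.)

```python
import re

# Abbreviated fragment of one "Server rejected event" line from above.
line = (
    'Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status '
    'is now: NodeHasSufficientMemory", Count:5, Type:"Normal"'
)

# Pull out the event Reason and its de-duplication Count.
m = re.search(r'Reason:"([^"]+)".*?Count:(\d+)', line)
reason, count = m.group(1), int(m.group(2))
print(reason, count)  # NodeHasSufficientMemory 5
```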
Feb 9 19:06:31.129611 kubelet[2026]: E0209 19:06:31.129539 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 31, 126563839, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:31.153402 kubelet[2026]: W0209 19:06:31.153353 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:06:31.153402 kubelet[2026]: E0209 19:06:31.153407 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:06:31.204418 kubelet[2026]: E0209 19:06:31.204289 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 31, 126568439, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch 
resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:31.600681 kubelet[2026]: E0209 19:06:31.600594 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:32.601105 kubelet[2026]: E0209 19:06:32.601043 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:32.624173 kubelet[2026]: E0209 19:06:32.624107 2026 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:06:32.729732 kubelet[2026]: I0209 19:06:32.729688 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:32.731418 kubelet[2026]: E0209 19:06:32.731352 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:32.731553 kubelet[2026]: E0209 19:06:32.731346 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 
10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 32, 729634850, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:32.732749 kubelet[2026]: E0209 19:06:32.732603 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 32, 729650151, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:06:32.733868 kubelet[2026]: E0209 19:06:32.733797 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 32, 729655651, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:33.076652 kubelet[2026]: W0209 19:06:33.076606 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:33.076652 kubelet[2026]: E0209 19:06:33.076654 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:33.602003 kubelet[2026]: E0209 19:06:33.601939 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:33.653121 kubelet[2026]: W0209 19:06:33.653033 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:33.653121 kubelet[2026]: E0209 19:06:33.653118 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:33.956105 kubelet[2026]: W0209 19:06:33.955960 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:33.956105 kubelet[2026]: E0209 19:06:33.956008 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:34.313944 kubelet[2026]: W0209 19:06:34.313913 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:06:34.314129 kubelet[2026]: E0209 19:06:34.313969 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:06:34.603348 kubelet[2026]: E0209 19:06:34.603124 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:35.603846 kubelet[2026]: E0209 19:06:35.603776 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:35.826664 kubelet[2026]: E0209 19:06:35.826613 2026 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:06:35.932964 kubelet[2026]: I0209 19:06:35.932331 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:35.934528 kubelet[2026]: E0209 19:06:35.934501 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Feb 9 19:06:35.934686 kubelet[2026]: E0209 19:06:35.934545 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f6980", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668505984, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 35, 932279059, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f6980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:35.935905 kubelet[2026]: E0209 19:06:35.935820 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f81b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668512184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 35, 932294159, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f81b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:35.937005 kubelet[2026]: E0209 19:06:35.936927 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.17b247497c2f8f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 6, 29, 668515584, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 6, 35, 932298959, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.17b247497c2f8f00" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:06:36.604582 kubelet[2026]: E0209 19:06:36.604520 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:37.605500 kubelet[2026]: E0209 19:06:37.605441 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:38.605939 kubelet[2026]: E0209 19:06:38.605841 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:38.869052 kubelet[2026]: W0209 19:06:38.868916 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:38.869052 kubelet[2026]: E0209 19:06:38.868964 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:06:38.887455 kubelet[2026]: W0209 19:06:38.887416 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:38.887455 kubelet[2026]: E0209 19:06:38.887457 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:06:39.102142 kubelet[2026]: W0209 19:06:39.102094 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed 
to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:39.102142 kubelet[2026]: E0209 19:06:39.102143 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:06:39.590428 kubelet[2026]: I0209 19:06:39.590350 2026 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:06:39.606775 kubelet[2026]: E0209 19:06:39.606745 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:39.695915 kubelet[2026]: E0209 19:06:39.695874 2026 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.48\" not found" Feb 9 19:06:39.974612 kubelet[2026]: E0209 19:06:39.974484 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.48" not found Feb 9 19:06:40.607747 kubelet[2026]: E0209 19:06:40.607706 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:41.219435 kubelet[2026]: E0209 19:06:41.219394 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.48" not found Feb 9 19:06:41.607646 kubelet[2026]: I0209 19:06:41.607592 2026 apiserver.go:52] "Watching apiserver" Feb 9 19:06:41.608702 kubelet[2026]: E0209 19:06:41.608677 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:41.913780 kubelet[2026]: I0209 19:06:41.913658 2026 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:06:41.989010 kubelet[2026]: I0209 19:06:41.988955 2026 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:06:42.231507 kubelet[2026]: E0209 19:06:42.231350 2026 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.48\" not found" node="10.200.8.48" Feb 9 19:06:42.336176 kubelet[2026]: I0209 19:06:42.336141 2026 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Feb 9 19:06:42.609331 kubelet[2026]: E0209 19:06:42.609250 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:42.620473 kubelet[2026]: I0209 19:06:42.620444 2026 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.48" Feb 9 19:06:42.640962 kubelet[2026]: I0209 19:06:42.640585 2026 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:06:42.641103 env[1408]: time="2024-02-09T19:06:42.641042062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:06:42.641561 kubelet[2026]: I0209 19:06:42.641286 2026 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:06:42.657659 sudo[1791]: pam_unix(sudo:session): session closed for user root Feb 9 19:06:42.656000 audit[1791]: USER_END pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:06:42.663105 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:06:42.663186 kernel: audit: type=1106 audit(1707505602.656:196): pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:06:42.668186 kubelet[2026]: I0209 19:06:42.668165 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:42.661000 audit[1791]: CRED_DISP pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:06:42.683631 kubelet[2026]: I0209 19:06:42.683612 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:42.691464 kubelet[2026]: I0209 19:06:42.691444 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:42.691820 kubelet[2026]: E0209 19:06:42.691801 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:42.692426 kernel: audit: type=1104 audit(1707505602.661:197): pid=1791 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:06:42.760536 sshd[1787]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:42.760000 audit[1787]: USER_END pid=1787 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:06:42.763866 systemd[1]: sshd@6-10.200.8.48:22-10.200.12.6:51088.service: Deactivated successfully. Feb 9 19:06:42.764986 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:06:42.767129 systemd-logind[1365]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:06:42.768418 systemd-logind[1365]: Removed session 9. Feb 9 19:06:42.760000 audit[1787]: CRED_DISP pid=1787 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:06:42.797564 kernel: audit: type=1106 audit(1707505602.760:198): pid=1787 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:06:42.797658 kernel: audit: type=1104 audit(1707505602.760:199): pid=1787 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:06:42.797689 kernel: audit: type=1131 audit(1707505602.762:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:51088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:06:42.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:51088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:06:42.797956 kubelet[2026]: I0209 19:06:42.797939 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-cni-log-dir\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.798053 kubelet[2026]: I0209 19:06:42.798045 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/80a02ef3-a462-4213-84a3-0d0df5da60f3-varrun\") pod \"csi-node-driver-kjz4d\" (UID: \"80a02ef3-a462-4213-84a3-0d0df5da60f3\") " pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:06:42.798122 kubelet[2026]: I0209 19:06:42.798115 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptn5\" (UniqueName: \"kubernetes.io/projected/80a02ef3-a462-4213-84a3-0d0df5da60f3-kube-api-access-hptn5\") pod \"csi-node-driver-kjz4d\" (UID: \"80a02ef3-a462-4213-84a3-0d0df5da60f3\") " pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:06:42.798183 kubelet[2026]: I0209 19:06:42.798177 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6caccba3-d014-4eef-9632-15c07ec65601-kube-proxy\") pod \"kube-proxy-ppvtw\" (UID: \"6caccba3-d014-4eef-9632-15c07ec65601\") " pod="kube-system/kube-proxy-ppvtw" Feb 9 19:06:42.798257 kubelet[2026]: I0209 19:06:42.798250 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8z88\" 
(UniqueName: \"kubernetes.io/projected/6caccba3-d014-4eef-9632-15c07ec65601-kube-api-access-r8z88\") pod \"kube-proxy-ppvtw\" (UID: \"6caccba3-d014-4eef-9632-15c07ec65601\") " pod="kube-system/kube-proxy-ppvtw" Feb 9 19:06:42.798324 kubelet[2026]: I0209 19:06:42.798317 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/742ea153-6aba-4872-9e49-edfbed07350a-node-certs\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.798480 kubelet[2026]: I0209 19:06:42.798471 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswks\" (UniqueName: \"kubernetes.io/projected/742ea153-6aba-4872-9e49-edfbed07350a-kube-api-access-tswks\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.798576 kubelet[2026]: I0209 19:06:42.798566 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80a02ef3-a462-4213-84a3-0d0df5da60f3-socket-dir\") pod \"csi-node-driver-kjz4d\" (UID: \"80a02ef3-a462-4213-84a3-0d0df5da60f3\") " pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:06:42.798649 kubelet[2026]: I0209 19:06:42.798642 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80a02ef3-a462-4213-84a3-0d0df5da60f3-registration-dir\") pod \"csi-node-driver-kjz4d\" (UID: \"80a02ef3-a462-4213-84a3-0d0df5da60f3\") " pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:06:42.798727 kubelet[2026]: I0209 19:06:42.798718 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6caccba3-d014-4eef-9632-15c07ec65601-xtables-lock\") pod \"kube-proxy-ppvtw\" (UID: \"6caccba3-d014-4eef-9632-15c07ec65601\") " pod="kube-system/kube-proxy-ppvtw" Feb 9 19:06:42.798792 kubelet[2026]: I0209 19:06:42.798785 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-policysync\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.798855 kubelet[2026]: I0209 19:06:42.798848 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/742ea153-6aba-4872-9e49-edfbed07350a-tigera-ca-bundle\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.798938 kubelet[2026]: I0209 19:06:42.798930 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-var-lib-calico\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799004 kubelet[2026]: I0209 19:06:42.798998 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80a02ef3-a462-4213-84a3-0d0df5da60f3-kubelet-dir\") pod \"csi-node-driver-kjz4d\" (UID: \"80a02ef3-a462-4213-84a3-0d0df5da60f3\") " pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:06:42.799073 kubelet[2026]: I0209 19:06:42.799066 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-flexvol-driver-host\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799151 kubelet[2026]: I0209 19:06:42.799141 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6caccba3-d014-4eef-9632-15c07ec65601-lib-modules\") pod \"kube-proxy-ppvtw\" (UID: \"6caccba3-d014-4eef-9632-15c07ec65601\") " pod="kube-system/kube-proxy-ppvtw" Feb 9 19:06:42.799220 kubelet[2026]: I0209 19:06:42.799213 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-lib-modules\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799285 kubelet[2026]: I0209 19:06:42.799277 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-xtables-lock\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799362 kubelet[2026]: I0209 19:06:42.799355 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-var-run-calico\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799725 kubelet[2026]: I0209 19:06:42.799714 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-cni-bin-dir\") pod 
\"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:42.799805 kubelet[2026]: I0209 19:06:42.799785 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/742ea153-6aba-4872-9e49-edfbed07350a-cni-net-dir\") pod \"calico-node-p555g\" (UID: \"742ea153-6aba-4872-9e49-edfbed07350a\") " pod="calico-system/calico-node-p555g" Feb 9 19:06:43.001775 kubelet[2026]: E0209 19:06:43.001645 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.001775 kubelet[2026]: W0209 19:06:43.001673 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.001775 kubelet[2026]: E0209 19:06:43.001708 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.002667 kubelet[2026]: E0209 19:06:43.002646 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.002893 kubelet[2026]: W0209 19:06:43.002876 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.003031 kubelet[2026]: E0209 19:06:43.003018 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.003529 kubelet[2026]: E0209 19:06:43.003517 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.003680 kubelet[2026]: W0209 19:06:43.003667 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.003782 kubelet[2026]: E0209 19:06:43.003773 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.004137 kubelet[2026]: E0209 19:06:43.004120 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.004137 kubelet[2026]: W0209 19:06:43.004132 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.004278 kubelet[2026]: E0209 19:06:43.004148 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.004351 kubelet[2026]: E0209 19:06:43.004335 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.004351 kubelet[2026]: W0209 19:06:43.004349 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.004466 kubelet[2026]: E0209 19:06:43.004365 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.004581 kubelet[2026]: E0209 19:06:43.004567 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.004581 kubelet[2026]: W0209 19:06:43.004579 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.004659 kubelet[2026]: E0209 19:06:43.004594 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.105696 kubelet[2026]: E0209 19:06:43.105662 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.105696 kubelet[2026]: W0209 19:06:43.105685 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.105992 kubelet[2026]: E0209 19:06:43.105710 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.105992 kubelet[2026]: E0209 19:06:43.105992 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.106132 kubelet[2026]: W0209 19:06:43.106007 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.106132 kubelet[2026]: E0209 19:06:43.106027 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.106257 kubelet[2026]: E0209 19:06:43.106249 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.106317 kubelet[2026]: W0209 19:06:43.106260 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.106317 kubelet[2026]: E0209 19:06:43.106279 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.106562 kubelet[2026]: E0209 19:06:43.106540 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.106562 kubelet[2026]: W0209 19:06:43.106558 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.106690 kubelet[2026]: E0209 19:06:43.106577 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.106841 kubelet[2026]: E0209 19:06:43.106822 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.106841 kubelet[2026]: W0209 19:06:43.106837 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.106989 kubelet[2026]: E0209 19:06:43.106856 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.107117 kubelet[2026]: E0209 19:06:43.107100 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.107117 kubelet[2026]: W0209 19:06:43.107115 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.107216 kubelet[2026]: E0209 19:06:43.107133 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.208690 kubelet[2026]: E0209 19:06:43.208650 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.208690 kubelet[2026]: W0209 19:06:43.208675 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.208690 kubelet[2026]: E0209 19:06:43.208704 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.209065 kubelet[2026]: E0209 19:06:43.208976 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.209065 kubelet[2026]: W0209 19:06:43.208990 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.209065 kubelet[2026]: E0209 19:06:43.209009 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.209248 kubelet[2026]: E0209 19:06:43.209216 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.209248 kubelet[2026]: W0209 19:06:43.209227 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.209248 kubelet[2026]: E0209 19:06:43.209244 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.209483 kubelet[2026]: E0209 19:06:43.209472 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.209547 kubelet[2026]: W0209 19:06:43.209483 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.209547 kubelet[2026]: E0209 19:06:43.209501 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.209730 kubelet[2026]: E0209 19:06:43.209711 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.209730 kubelet[2026]: W0209 19:06:43.209725 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.209873 kubelet[2026]: E0209 19:06:43.209744 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.209989 kubelet[2026]: E0209 19:06:43.209971 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.209989 kubelet[2026]: W0209 19:06:43.209985 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.210086 kubelet[2026]: E0209 19:06:43.210004 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.222664 kubelet[2026]: E0209 19:06:43.222645 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.222664 kubelet[2026]: W0209 19:06:43.222658 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.222804 kubelet[2026]: E0209 19:06:43.222674 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.311310 kubelet[2026]: E0209 19:06:43.311283 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.311310 kubelet[2026]: W0209 19:06:43.311307 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.311613 kubelet[2026]: E0209 19:06:43.311334 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.311682 kubelet[2026]: E0209 19:06:43.311629 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.311682 kubelet[2026]: W0209 19:06:43.311642 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.311682 kubelet[2026]: E0209 19:06:43.311663 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.311926 kubelet[2026]: E0209 19:06:43.311903 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.311926 kubelet[2026]: W0209 19:06:43.311923 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.312066 kubelet[2026]: E0209 19:06:43.311943 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.312203 kubelet[2026]: E0209 19:06:43.312183 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.312203 kubelet[2026]: W0209 19:06:43.312199 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.312366 kubelet[2026]: E0209 19:06:43.312217 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.312501 kubelet[2026]: E0209 19:06:43.312481 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.312501 kubelet[2026]: W0209 19:06:43.312496 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.312611 kubelet[2026]: E0209 19:06:43.312515 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.413257 kubelet[2026]: E0209 19:06:43.413229 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.413257 kubelet[2026]: W0209 19:06:43.413249 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.413540 kubelet[2026]: E0209 19:06:43.413275 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.413686 kubelet[2026]: E0209 19:06:43.413558 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.413686 kubelet[2026]: W0209 19:06:43.413571 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.413686 kubelet[2026]: E0209 19:06:43.413591 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.413877 kubelet[2026]: E0209 19:06:43.413813 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.413877 kubelet[2026]: W0209 19:06:43.413824 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.413877 kubelet[2026]: E0209 19:06:43.413842 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.414052 kubelet[2026]: E0209 19:06:43.414045 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.414107 kubelet[2026]: W0209 19:06:43.414056 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.414107 kubelet[2026]: E0209 19:06:43.414073 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.414292 kubelet[2026]: E0209 19:06:43.414275 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.414292 kubelet[2026]: W0209 19:06:43.414289 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.414423 kubelet[2026]: E0209 19:06:43.414306 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.515804 kubelet[2026]: E0209 19:06:43.515769 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.515804 kubelet[2026]: W0209 19:06:43.515792 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.515804 kubelet[2026]: E0209 19:06:43.515817 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.516146 kubelet[2026]: E0209 19:06:43.516070 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.516146 kubelet[2026]: W0209 19:06:43.516083 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.516146 kubelet[2026]: E0209 19:06:43.516102 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.516348 kubelet[2026]: E0209 19:06:43.516309 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.516348 kubelet[2026]: W0209 19:06:43.516321 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.516348 kubelet[2026]: E0209 19:06:43.516338 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.516615 kubelet[2026]: E0209 19:06:43.516596 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.516615 kubelet[2026]: W0209 19:06:43.516611 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.516827 kubelet[2026]: E0209 19:06:43.516630 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.516895 kubelet[2026]: E0209 19:06:43.516841 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.516895 kubelet[2026]: W0209 19:06:43.516852 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.516895 kubelet[2026]: E0209 19:06:43.516870 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.609555 kubelet[2026]: E0209 19:06:43.609406 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:43.617302 kubelet[2026]: E0209 19:06:43.617275 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.617302 kubelet[2026]: W0209 19:06:43.617298 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.617553 kubelet[2026]: E0209 19:06:43.617323 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.617632 kubelet[2026]: E0209 19:06:43.617619 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.617697 kubelet[2026]: W0209 19:06:43.617633 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.617697 kubelet[2026]: E0209 19:06:43.617653 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.617909 kubelet[2026]: E0209 19:06:43.617888 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.617909 kubelet[2026]: W0209 19:06:43.617904 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.618085 kubelet[2026]: E0209 19:06:43.617928 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.618195 kubelet[2026]: E0209 19:06:43.618178 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.618195 kubelet[2026]: W0209 19:06:43.618192 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.618317 kubelet[2026]: E0209 19:06:43.618211 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.618500 kubelet[2026]: E0209 19:06:43.618481 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.618500 kubelet[2026]: W0209 19:06:43.618496 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.618617 kubelet[2026]: E0209 19:06:43.618514 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.719814 kubelet[2026]: E0209 19:06:43.719777 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.719814 kubelet[2026]: W0209 19:06:43.719802 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.719814 kubelet[2026]: E0209 19:06:43.719829 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.720179 kubelet[2026]: E0209 19:06:43.720074 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.720179 kubelet[2026]: W0209 19:06:43.720087 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.720179 kubelet[2026]: E0209 19:06:43.720107 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.720366 kubelet[2026]: E0209 19:06:43.720314 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.720366 kubelet[2026]: W0209 19:06:43.720324 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.720366 kubelet[2026]: E0209 19:06:43.720342 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.720617 kubelet[2026]: E0209 19:06:43.720587 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.720617 kubelet[2026]: W0209 19:06:43.720609 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.720757 kubelet[2026]: E0209 19:06:43.720630 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.720876 kubelet[2026]: E0209 19:06:43.720856 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.720876 kubelet[2026]: W0209 19:06:43.720874 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.720979 kubelet[2026]: E0209 19:06:43.720893 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.816360 kubelet[2026]: I0209 19:06:43.816306 2026 request.go:690] Waited for 1.131936647s due to client-side throttling, not priority and fairness, request: GET:https://10.200.8.37:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&limit=500&resourceVersion=0 Feb 9 19:06:43.822787 kubelet[2026]: E0209 19:06:43.822135 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.822787 kubelet[2026]: W0209 19:06:43.822158 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.822787 kubelet[2026]: E0209 19:06:43.822187 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.822787 kubelet[2026]: E0209 19:06:43.822489 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.822787 kubelet[2026]: W0209 19:06:43.822503 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.822787 kubelet[2026]: E0209 19:06:43.822522 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.822787 kubelet[2026]: E0209 19:06:43.822794 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.823321 kubelet[2026]: W0209 19:06:43.822805 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.823321 kubelet[2026]: E0209 19:06:43.822823 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.823321 kubelet[2026]: E0209 19:06:43.823063 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.823321 kubelet[2026]: W0209 19:06:43.823085 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.823321 kubelet[2026]: E0209 19:06:43.823102 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.823542 kubelet[2026]: E0209 19:06:43.823332 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.823542 kubelet[2026]: W0209 19:06:43.823343 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.823542 kubelet[2026]: E0209 19:06:43.823381 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.831614 kubelet[2026]: E0209 19:06:43.831595 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.831614 kubelet[2026]: W0209 19:06:43.831609 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.831738 kubelet[2026]: E0209 19:06:43.831634 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.902990 kubelet[2026]: E0209 19:06:43.902355 2026 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:06:43.902990 kubelet[2026]: E0209 19:06:43.902494 2026 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/742ea153-6aba-4872-9e49-edfbed07350a-tigera-ca-bundle podName:742ea153-6aba-4872-9e49-edfbed07350a nodeName:}" failed. No retries permitted until 2024-02-09 19:06:44.402448619 +0000 UTC m=+15.272717867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/742ea153-6aba-4872-9e49-edfbed07350a-tigera-ca-bundle") pod "calico-node-p555g" (UID: "742ea153-6aba-4872-9e49-edfbed07350a") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:06:43.924062 kubelet[2026]: E0209 19:06:43.924037 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.924062 kubelet[2026]: W0209 19:06:43.924055 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.924307 kubelet[2026]: E0209 19:06:43.924078 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.924359 kubelet[2026]: E0209 19:06:43.924309 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.924359 kubelet[2026]: W0209 19:06:43.924322 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.924359 kubelet[2026]: E0209 19:06:43.924341 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:43.924553 kubelet[2026]: E0209 19:06:43.924542 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.924553 kubelet[2026]: W0209 19:06:43.924552 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.924645 kubelet[2026]: E0209 19:06:43.924566 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:43.924751 kubelet[2026]: E0209 19:06:43.924736 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:43.924751 kubelet[2026]: W0209 19:06:43.924748 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:43.924851 kubelet[2026]: E0209 19:06:43.924764 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.025761 kubelet[2026]: E0209 19:06:44.025728 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.025761 kubelet[2026]: W0209 19:06:44.025749 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.025761 kubelet[2026]: E0209 19:06:44.025770 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.026079 kubelet[2026]: E0209 19:06:44.026018 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.026079 kubelet[2026]: W0209 19:06:44.026029 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.026079 kubelet[2026]: E0209 19:06:44.026044 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.026237 kubelet[2026]: E0209 19:06:44.026228 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.026283 kubelet[2026]: W0209 19:06:44.026237 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.026283 kubelet[2026]: E0209 19:06:44.026253 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.026501 kubelet[2026]: E0209 19:06:44.026482 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.026501 kubelet[2026]: W0209 19:06:44.026496 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.026645 kubelet[2026]: E0209 19:06:44.026511 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.128007 kubelet[2026]: E0209 19:06:44.127969 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.128007 kubelet[2026]: W0209 19:06:44.127997 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.128336 kubelet[2026]: E0209 19:06:44.128023 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.128336 kubelet[2026]: E0209 19:06:44.128320 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.128336 kubelet[2026]: W0209 19:06:44.128333 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.128587 kubelet[2026]: E0209 19:06:44.128353 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.128650 kubelet[2026]: E0209 19:06:44.128643 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.128706 kubelet[2026]: W0209 19:06:44.128664 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.128706 kubelet[2026]: E0209 19:06:44.128685 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.128951 kubelet[2026]: E0209 19:06:44.128925 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.128951 kubelet[2026]: W0209 19:06:44.128941 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.129080 kubelet[2026]: E0209 19:06:44.128962 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.231047 kubelet[2026]: E0209 19:06:44.229946 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.231255 kubelet[2026]: W0209 19:06:44.231228 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.231343 kubelet[2026]: E0209 19:06:44.231264 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.231572 kubelet[2026]: E0209 19:06:44.231547 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.231572 kubelet[2026]: W0209 19:06:44.231561 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.231736 kubelet[2026]: E0209 19:06:44.231577 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.231795 kubelet[2026]: E0209 19:06:44.231777 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.231848 kubelet[2026]: W0209 19:06:44.231795 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.231848 kubelet[2026]: E0209 19:06:44.231809 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.232002 kubelet[2026]: E0209 19:06:44.231984 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.232002 kubelet[2026]: W0209 19:06:44.231997 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.232097 kubelet[2026]: E0209 19:06:44.232012 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.333201 kubelet[2026]: E0209 19:06:44.333153 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.333201 kubelet[2026]: W0209 19:06:44.333190 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.333573 kubelet[2026]: E0209 19:06:44.333220 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.333573 kubelet[2026]: E0209 19:06:44.333509 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.333573 kubelet[2026]: W0209 19:06:44.333523 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.333573 kubelet[2026]: E0209 19:06:44.333545 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.333827 kubelet[2026]: E0209 19:06:44.333769 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.333827 kubelet[2026]: W0209 19:06:44.333780 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.333827 kubelet[2026]: E0209 19:06:44.333798 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.334019 kubelet[2026]: E0209 19:06:44.334011 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.334079 kubelet[2026]: W0209 19:06:44.334021 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.334079 kubelet[2026]: E0209 19:06:44.334039 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.434514 kubelet[2026]: E0209 19:06:44.434476 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.434718 kubelet[2026]: W0209 19:06:44.434695 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.434901 kubelet[2026]: E0209 19:06:44.434725 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.435059 kubelet[2026]: E0209 19:06:44.435041 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.435144 kubelet[2026]: W0209 19:06:44.435064 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.435144 kubelet[2026]: E0209 19:06:44.435088 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.435367 kubelet[2026]: E0209 19:06:44.435351 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.435471 kubelet[2026]: W0209 19:06:44.435364 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.435471 kubelet[2026]: E0209 19:06:44.435411 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.435639 kubelet[2026]: E0209 19:06:44.435625 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.435639 kubelet[2026]: W0209 19:06:44.435636 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.435771 kubelet[2026]: E0209 19:06:44.435655 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.435856 kubelet[2026]: E0209 19:06:44.435839 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.435856 kubelet[2026]: W0209 19:06:44.435852 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.435979 kubelet[2026]: E0209 19:06:44.435867 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.436055 kubelet[2026]: E0209 19:06:44.436039 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.436055 kubelet[2026]: W0209 19:06:44.436051 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.436185 kubelet[2026]: E0209 19:06:44.436078 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.436247 kubelet[2026]: E0209 19:06:44.436234 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.436294 kubelet[2026]: W0209 19:06:44.436248 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.436294 kubelet[2026]: E0209 19:06:44.436263 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.436543 kubelet[2026]: E0209 19:06:44.436510 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.436543 kubelet[2026]: W0209 19:06:44.436538 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.436681 kubelet[2026]: E0209 19:06:44.436554 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.437328 kubelet[2026]: E0209 19:06:44.437309 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.437328 kubelet[2026]: W0209 19:06:44.437322 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.437461 kubelet[2026]: E0209 19:06:44.437338 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537091 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.538418 kubelet[2026]: W0209 19:06:44.537114 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537139 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537434 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.538418 kubelet[2026]: W0209 19:06:44.537447 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537468 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537678 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.538418 kubelet[2026]: W0209 19:06:44.537688 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.538418 kubelet[2026]: E0209 19:06:44.537705 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.609597 kubelet[2026]: E0209 19:06:44.609544 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:44.631501 kubelet[2026]: E0209 19:06:44.631470 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.632737 kubelet[2026]: W0209 19:06:44.632703 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.632852 kubelet[2026]: E0209 19:06:44.632741 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.638770 kubelet[2026]: E0209 19:06:44.638752 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.638770 kubelet[2026]: W0209 19:06:44.638766 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.638948 kubelet[2026]: E0209 19:06:44.638784 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.639017 kubelet[2026]: E0209 19:06:44.639003 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.639068 kubelet[2026]: W0209 19:06:44.639019 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.639068 kubelet[2026]: E0209 19:06:44.639035 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.740354 kubelet[2026]: E0209 19:06:44.740319 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.740354 kubelet[2026]: W0209 19:06:44.740345 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.740684 kubelet[2026]: E0209 19:06:44.740397 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.740684 kubelet[2026]: E0209 19:06:44.740674 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.740791 kubelet[2026]: W0209 19:06:44.740687 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.740791 kubelet[2026]: E0209 19:06:44.740707 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.789204 env[1408]: time="2024-02-09T19:06:44.789071333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p555g,Uid:742ea153-6aba-4872-9e49-edfbed07350a,Namespace:calico-system,Attempt:0,}" Feb 9 19:06:44.830299 kubelet[2026]: E0209 19:06:44.830274 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.830299 kubelet[2026]: W0209 19:06:44.830293 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.830491 kubelet[2026]: E0209 19:06:44.830318 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:44.841563 kubelet[2026]: E0209 19:06:44.841539 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.841563 kubelet[2026]: W0209 19:06:44.841555 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.841716 kubelet[2026]: E0209 19:06:44.841576 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:44.909185 kubelet[2026]: E0209 19:06:44.909154 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:44.942188 kubelet[2026]: E0209 19:06:44.942163 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:44.942188 kubelet[2026]: W0209 19:06:44.942182 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:44.942411 kubelet[2026]: E0209 19:06:44.942206 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:45.034932 kubelet[2026]: E0209 19:06:45.034904 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:45.035133 kubelet[2026]: W0209 19:06:45.035114 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:45.035255 kubelet[2026]: E0209 19:06:45.035242 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:45.071101 env[1408]: time="2024-02-09T19:06:45.071000326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppvtw,Uid:6caccba3-d014-4eef-9632-15c07ec65601,Namespace:kube-system,Attempt:0,}" Feb 9 19:06:45.610678 kubelet[2026]: E0209 19:06:45.610614 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:45.641072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149936529.mount: Deactivated successfully. 
Feb 9 19:06:45.661996 env[1408]: time="2024-02-09T19:06:45.661947069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.665551 env[1408]: time="2024-02-09T19:06:45.665512049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.675990 env[1408]: time="2024-02-09T19:06:45.675953385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.679526 env[1408]: time="2024-02-09T19:06:45.679491965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.683037 env[1408]: time="2024-02-09T19:06:45.683002944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.686620 env[1408]: time="2024-02-09T19:06:45.686589225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.689554 env[1408]: time="2024-02-09T19:06:45.689524191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.692721 env[1408]: time="2024-02-09T19:06:45.692686263Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:45.745989 env[1408]: time="2024-02-09T19:06:45.745903264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:45.746220 env[1408]: time="2024-02-09T19:06:45.746189771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:45.746345 env[1408]: time="2024-02-09T19:06:45.746308973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:45.746789 env[1408]: time="2024-02-09T19:06:45.746665681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b4e0f54100f1ef09d0cdd5b2005cb58203b41b8d3bc3ef09344b4d975e8d7e6 pid=2199 runtime=io.containerd.runc.v2 Feb 9 19:06:45.763763 env[1408]: time="2024-02-09T19:06:45.763692866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:45.763874 env[1408]: time="2024-02-09T19:06:45.763772668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:45.763874 env[1408]: time="2024-02-09T19:06:45.763799668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:45.763968 env[1408]: time="2024-02-09T19:06:45.763936971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde pid=2218 runtime=io.containerd.runc.v2 Feb 9 19:06:45.823165 env[1408]: time="2024-02-09T19:06:45.822559595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppvtw,Uid:6caccba3-d014-4eef-9632-15c07ec65601,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b4e0f54100f1ef09d0cdd5b2005cb58203b41b8d3bc3ef09344b4d975e8d7e6\"" Feb 9 19:06:45.825098 env[1408]: time="2024-02-09T19:06:45.825047151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p555g,Uid:742ea153-6aba-4872-9e49-edfbed07350a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\"" Feb 9 19:06:45.825224 env[1408]: time="2024-02-09T19:06:45.825057951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:06:46.611816 kubelet[2026]: E0209 19:06:46.611725 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:46.799271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818243948.mount: Deactivated successfully. 
Feb 9 19:06:46.909909 kubelet[2026]: E0209 19:06:46.909066 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:47.284911 env[1408]: time="2024-02-09T19:06:47.284490591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:47.289857 env[1408]: time="2024-02-09T19:06:47.289816805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:47.292828 env[1408]: time="2024-02-09T19:06:47.292790269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:47.295454 env[1408]: time="2024-02-09T19:06:47.295422626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:47.295875 env[1408]: time="2024-02-09T19:06:47.295842935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:06:47.297051 env[1408]: time="2024-02-09T19:06:47.297020560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:06:47.298473 env[1408]: time="2024-02-09T19:06:47.298443190Z" level=info msg="CreateContainer within sandbox 
\"5b4e0f54100f1ef09d0cdd5b2005cb58203b41b8d3bc3ef09344b4d975e8d7e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:06:47.331817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541780878.mount: Deactivated successfully. Feb 9 19:06:47.341225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869294280.mount: Deactivated successfully. Feb 9 19:06:47.350674 env[1408]: time="2024-02-09T19:06:47.350628608Z" level=info msg="CreateContainer within sandbox \"5b4e0f54100f1ef09d0cdd5b2005cb58203b41b8d3bc3ef09344b4d975e8d7e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e824ab0d413f81645c858a8afb90bde94bbf7b3993e6d98716cf1717b64b738e\"" Feb 9 19:06:47.351406 env[1408]: time="2024-02-09T19:06:47.351361424Z" level=info msg="StartContainer for \"e824ab0d413f81645c858a8afb90bde94bbf7b3993e6d98716cf1717b64b738e\"" Feb 9 19:06:47.414027 env[1408]: time="2024-02-09T19:06:47.412233429Z" level=info msg="StartContainer for \"e824ab0d413f81645c858a8afb90bde94bbf7b3993e6d98716cf1717b64b738e\" returns successfully" Feb 9 19:06:47.459000 audit[2336]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.461000 audit[2337]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.482132 kernel: audit: type=1325 audit(1707505607.459:201): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.482232 kernel: audit: type=1325 audit(1707505607.461:202): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.483396 kernel: audit: type=1300 audit(1707505607.461:202): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4735c220 a2=0 a3=7fff4735c20c items=0 ppid=2297 pid=2337 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.461000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4735c220 a2=0 a3=7fff4735c20c items=0 ppid=2297 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:06:47.512058 kernel: audit: type=1327 audit(1707505607.461:202): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:06:47.512120 kernel: audit: type=1325 audit(1707505607.462:203): table=nat:43 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.462000 audit[2338]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.462000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff225a80f0 a2=0 a3=7fff225a80dc items=0 ppid=2297 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:06:47.464000 audit[2339]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.464000 audit[2339]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffc60d925b0 a2=0 a3=7ffc60d9259c items=0 ppid=2297 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:06:47.459000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb9805580 a2=0 a3=7ffcb980556c items=0 ppid=2297 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:06:47.469000 audit[2340]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.469000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffacbc9000 a2=0 a3=7fffacbc8fec items=0 ppid=2297 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:06:47.471000 audit[2341]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.471000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe713e9c0 a2=0 a3=7fffe713e9ac items=0 ppid=2297 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:06:47.560000 audit[2342]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.560000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc768ab5d0 a2=0 a3=7ffc768ab5bc items=0 ppid=2297 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:06:47.564000 audit[2344]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.564000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe13222190 a2=0 a3=7ffe1322217c items=0 ppid=2297 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:06:47.569000 audit[2347]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:06:47.569000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc58adeb40 a2=0 a3=7ffc58adeb2c items=0 ppid=2297 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:06:47.570000 audit[2348]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.570000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc69016580 a2=0 a3=7ffc6901656c items=0 ppid=2297 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:06:47.573000 audit[2350]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.573000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1119aca0 a2=0 a3=7fff1119ac8c items=0 ppid=2297 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.573000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:06:47.574000 audit[2351]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.574000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdaa3f94e0 a2=0 a3=7ffdaa3f94cc items=0 ppid=2297 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:06:47.577000 audit[2353]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.577000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd514b6a60 a2=0 a3=7ffd514b6a4c items=0 ppid=2297 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:06:47.580000 audit[2356]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.580000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7fff310b69c0 a2=0 a3=7fff310b69ac items=0 ppid=2297 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:06:47.582000 audit[2357]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.582000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdda0c1800 a2=0 a3=7ffdda0c17ec items=0 ppid=2297 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:06:47.584000 audit[2359]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.584000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec04eada0 a2=0 a3=7ffec04ead8c items=0 ppid=2297 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.584000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:06:47.585000 audit[2360]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.585000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0a88b7c0 a2=0 a3=7ffe0a88b7ac items=0 ppid=2297 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:06:47.588000 audit[2362]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.588000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd932cfc00 a2=0 a3=7ffd932cfbec items=0 ppid=2297 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:06:47.591000 audit[2365]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.591000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc30f68c30 a2=0 a3=7ffc30f68c1c items=0 ppid=2297 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:06:47.594000 audit[2368]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.594000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6f3dc6a0 a2=0 a3=7ffd6f3dc68c items=0 ppid=2297 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:06:47.596000 audit[2369]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.596000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdbb7f010 a2=0 a3=7ffcdbb7effc items=0 ppid=2297 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.596000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:06:47.598000 audit[2371]: NETFILTER_CFG table=nat:62 family=2 entries=2 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.598000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffca4974c90 a2=0 a3=7ffca4974c7c items=0 ppid=2297 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:06:47.601000 audit[2374]: NETFILTER_CFG table=nat:63 family=2 entries=2 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:06:47.601000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdbf4f19f0 a2=0 a3=7ffdbf4f19dc items=0 ppid=2297 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:06:47.606000 audit[2378]: NETFILTER_CFG table=filter:64 family=2 entries=3 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:06:47.606000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff96c1d160 a2=0 a3=7fff96c1d14c items=0 ppid=2297 pid=2378 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:06:47.612813 kubelet[2026]: E0209 19:06:47.612766 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:47.677000 audit[2378]: NETFILTER_CFG table=nat:65 family=2 entries=68 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:06:47.682138 kernel: kauditd_printk_skb: 67 callbacks suppressed Feb 9 19:06:47.682231 kernel: audit: type=1325 audit(1707505607.677:225): table=nat:65 family=2 entries=68 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:06:47.677000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff96c1d160 a2=0 a3=7fff96c1d14c items=0 ppid=2297 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:06:47.722890 kernel: audit: type=1300 audit(1707505607.677:225): arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff96c1d160 a2=0 a3=7fff96c1d14c items=0 ppid=2297 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.723001 kernel: audit: type=1327 audit(1707505607.677:225): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:06:47.771000 audit[2385]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.771000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca420dbc0 a2=0 a3=7ffca420dbac items=0 ppid=2297 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.806999 kernel: audit: type=1325 audit(1707505607.771:226): table=filter:66 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.807140 kernel: audit: type=1300 audit(1707505607.771:226): arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca420dbc0 a2=0 a3=7ffca420dbac items=0 ppid=2297 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:06:47.771000 audit[2387]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.830153 kernel: audit: type=1327 audit(1707505607.771:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:06:47.830326 kernel: audit: type=1325 audit(1707505607.771:227): table=filter:67 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.771000 audit[2387]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd433d9100 a2=0 a3=7ffd433d90ec items=0 ppid=2297 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:06:47.868348 kernel: audit: type=1300 audit(1707505607.771:227): arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd433d9100 a2=0 a3=7ffd433d90ec items=0 ppid=2297 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.868446 kernel: audit: type=1327 audit(1707505607.771:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:06:47.868477 kernel: audit: type=1325 audit(1707505607.776:228): table=filter:68 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.776000 audit[2390]: NETFILTER_CFG table=filter:68 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.776000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff816716f0 a2=0 a3=7fff816716dc items=0 ppid=2297 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:06:47.776000 audit[2391]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.776000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfb026ee0 a2=0 a3=7ffcfb026ecc items=0 ppid=2297 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:06:47.781000 audit[2393]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.781000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdacfc6430 a2=0 a3=7ffdacfc641c items=0 ppid=2297 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:06:47.784000 audit[2394]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=2394 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.784000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb4c9de00 a2=0 a3=7ffeb4c9ddec items=0 ppid=2297 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:06:47.784000 audit[2396]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.784000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd61e09440 a2=0 a3=7ffd61e0942c items=0 ppid=2297 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:06:47.790000 audit[2399]: NETFILTER_CFG table=filter:73 family=10 entries=2 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.790000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd202eb9a0 a2=0 a3=7ffd202eb98c items=0 ppid=2297 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.790000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:06:47.790000 audit[2400]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.790000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd05208060 a2=0 a3=7ffd0520804c items=0 ppid=2297 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:06:47.795000 audit[2402]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.795000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7a3e9130 a2=0 a3=7ffc7a3e911c items=0 ppid=2297 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:06:47.795000 audit[2403]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.795000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc639dc5f0 a2=0 a3=7ffc639dc5dc items=0 ppid=2297 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:06:47.800000 audit[2405]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.800000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe61c25960 a2=0 a3=7ffe61c2594c items=0 ppid=2297 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:06:47.805000 audit[2408]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.805000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde3d72460 a2=0 a3=7ffde3d7244c items=0 ppid=2297 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.805000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:06:47.807000 audit[2411]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.807000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff71bbff00 a2=0 a3=7fff71bbfeec items=0 ppid=2297 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:06:47.807000 audit[2412]: NETFILTER_CFG table=nat:80 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.807000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7284b1b0 a2=0 a3=7ffd7284b19c items=0 ppid=2297 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:06:47.813000 audit[2414]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.813000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=600 a0=3 a1=7ffd36f1afa0 a2=0 a3=7ffd36f1af8c items=0 ppid=2297 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:06:47.818000 audit[2417]: NETFILTER_CFG table=nat:82 family=10 entries=2 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:06:47.818000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcfcbf13e0 a2=0 a3=7ffcfcbf13cc items=0 ppid=2297 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:06:47.825000 audit[2421]: NETFILTER_CFG table=filter:83 family=10 entries=3 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:06:47.825000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe2a9eb1d0 a2=0 a3=7ffe2a9eb1bc items=0 ppid=2297 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.825000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:06:47.878000 audit[2421]: NETFILTER_CFG table=nat:84 family=10 entries=10 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:06:47.878000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffe2a9eb1d0 a2=0 a3=7ffe2a9eb1bc items=0 ppid=2297 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:06:47.878000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:06:47.956395 kubelet[2026]: I0209 19:06:47.956338 2026 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ppvtw" podStartSLOduration=-9.223372030898481e+09 pod.CreationTimestamp="2024-02-09 19:06:42 +0000 UTC" firstStartedPulling="2024-02-09 19:06:45.823975027 +0000 UTC m=+16.694244375" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:47.956162684 +0000 UTC m=+18.826431932" watchObservedRunningTime="2024-02-09 19:06:47.956293586 +0000 UTC m=+18.826562934" Feb 9 19:06:47.962180 kubelet[2026]: E0209 19:06:47.962144 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:47.962180 kubelet[2026]: W0209 19:06:47.962170 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:47.962470 kubelet[2026]: E0209 19:06:47.962192 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 9 19:06:48.613068 kubelet[2026]: E0209 19:06:48.613012 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:48.908918 kubelet[2026]: E0209 19:06:48.908465 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3
Feb 9 19:06:48.987494 kubelet[2026]: E0209 19:06:48.987468 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:06:48.987494 kubelet[2026]: W0209 19:06:48.987478 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:06:48.987494 kubelet[2026]: E0209 19:06:48.987494 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:48.987655 kubelet[2026]: E0209 19:06:48.987638 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:48.987655 kubelet[2026]: W0209 19:06:48.987650 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:48.987758 kubelet[2026]: E0209 19:06:48.987664 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:48.987865 kubelet[2026]: E0209 19:06:48.987852 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:48.987865 kubelet[2026]: W0209 19:06:48.987863 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:48.987965 kubelet[2026]: E0209 19:06:48.987878 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:06:48.988185 kubelet[2026]: E0209 19:06:48.988170 2026 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:06:48.988185 kubelet[2026]: W0209 19:06:48.988181 2026 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:06:48.988320 kubelet[2026]: E0209 19:06:48.988198 2026 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:06:49.129588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889046586.mount: Deactivated successfully. Feb 9 19:06:49.597914 kubelet[2026]: E0209 19:06:49.597880 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:49.613169 kubelet[2026]: E0209 19:06:49.613129 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:50.319622 env[1408]: time="2024-02-09T19:06:50.319477793Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.330181 env[1408]: time="2024-02-09T19:06:50.330136105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.336701 env[1408]: time="2024-02-09T19:06:50.336651634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
19:06:50.341009 env[1408]: time="2024-02-09T19:06:50.340967220Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.341941 env[1408]: time="2024-02-09T19:06:50.341902238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:06:50.344164 env[1408]: time="2024-02-09T19:06:50.344130782Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:06:50.383209 env[1408]: time="2024-02-09T19:06:50.383104355Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba\"" Feb 9 19:06:50.383902 env[1408]: time="2024-02-09T19:06:50.383858670Z" level=info msg="StartContainer for \"8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba\"" Feb 9 19:06:50.454926 env[1408]: time="2024-02-09T19:06:50.454875379Z" level=info msg="StartContainer for \"8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba\" returns successfully" Feb 9 19:06:50.471929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba-rootfs.mount: Deactivated successfully. 
Feb 9 19:06:50.613822 kubelet[2026]: E0209 19:06:50.613597 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:51.251294 kubelet[2026]: E0209 19:06:50.909157 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:51.260693 env[1408]: time="2024-02-09T19:06:51.260637330Z" level=info msg="shim disconnected" id=8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba Feb 9 19:06:51.260851 env[1408]: time="2024-02-09T19:06:51.260707032Z" level=warning msg="cleaning up after shim disconnected" id=8d2b099dc5ae61b7e328abf13f59d53ef3aeffd5fedf5e00f849b08bb9f309ba namespace=k8s.io Feb 9 19:06:51.260851 env[1408]: time="2024-02-09T19:06:51.260721732Z" level=info msg="cleaning up dead shim" Feb 9 19:06:51.268737 env[1408]: time="2024-02-09T19:06:51.268694086Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2529 runtime=io.containerd.runc.v2\n" Feb 9 19:06:51.614711 kubelet[2026]: E0209 19:06:51.614651 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:51.962093 env[1408]: time="2024-02-09T19:06:51.961807387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:06:52.615252 kubelet[2026]: E0209 19:06:52.615201 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:52.908937 kubelet[2026]: E0209 19:06:52.908460 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:53.615447 kubelet[2026]: E0209 19:06:53.615384 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:53.814471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695000492.mount: Deactivated successfully. Feb 9 19:06:54.616276 kubelet[2026]: E0209 19:06:54.616238 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:54.909466 kubelet[2026]: E0209 19:06:54.908718 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:55.617258 kubelet[2026]: E0209 19:06:55.617204 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:56.618005 kubelet[2026]: E0209 19:06:56.617935 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:56.909835 kubelet[2026]: E0209 19:06:56.909116 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:57.587602 env[1408]: time="2024-02-09T19:06:57.587541948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:06:57.593136 env[1408]: time="2024-02-09T19:06:57.592653033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:57.597562 env[1408]: time="2024-02-09T19:06:57.597517514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:57.601424 env[1408]: time="2024-02-09T19:06:57.601395979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:57.601751 env[1408]: time="2024-02-09T19:06:57.601723884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:06:57.604075 env[1408]: time="2024-02-09T19:06:57.604040423Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:06:57.619026 kubelet[2026]: E0209 19:06:57.618986 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:57.633726 env[1408]: time="2024-02-09T19:06:57.633691417Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4\"" Feb 9 19:06:57.634254 env[1408]: time="2024-02-09T19:06:57.634221125Z" level=info msg="StartContainer for 
\"4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4\"" Feb 9 19:06:57.661979 systemd[1]: run-containerd-runc-k8s.io-4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4-runc.Esb37Q.mount: Deactivated successfully. Feb 9 19:06:57.696423 env[1408]: time="2024-02-09T19:06:57.696069456Z" level=info msg="StartContainer for \"4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4\" returns successfully" Feb 9 19:06:58.619734 kubelet[2026]: E0209 19:06:58.619689 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:58.909249 kubelet[2026]: E0209 19:06:58.909102 2026 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:06:59.387843 env[1408]: time="2024-02-09T19:06:59.387774729Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:06:59.410602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4-rootfs.mount: Deactivated successfully. 
Feb 9 19:06:59.431120 kubelet[2026]: I0209 19:06:59.431094 2026 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:06:59.620288 kubelet[2026]: E0209 19:06:59.620225 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:59.709495 env[1408]: time="2024-02-09T19:06:59.709244230Z" level=error msg="collecting metrics for 4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4" error="cgroups: cgroup deleted: unknown" Feb 9 19:07:01.042520 kubelet[2026]: E0209 19:07:00.621058 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:01.046710 env[1408]: time="2024-02-09T19:07:01.046658754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjz4d,Uid:80a02ef3-a462-4213-84a3-0d0df5da60f3,Namespace:calico-system,Attempt:0,}" Feb 9 19:07:01.060705 env[1408]: time="2024-02-09T19:07:01.060660066Z" level=info msg="shim disconnected" id=4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4 Feb 9 19:07:01.060826 env[1408]: time="2024-02-09T19:07:01.060705866Z" level=warning msg="cleaning up after shim disconnected" id=4ca9844bd92b4e190b10feffeab78792976ef37292cce6aed9c1ac496e074ba4 namespace=k8s.io Feb 9 19:07:01.060826 env[1408]: time="2024-02-09T19:07:01.060717267Z" level=info msg="cleaning up dead shim" Feb 9 19:07:01.069015 env[1408]: time="2024-02-09T19:07:01.068983592Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2597 runtime=io.containerd.runc.v2\n" Feb 9 19:07:01.129241 env[1408]: time="2024-02-09T19:07:01.129181203Z" level=error msg="Failed to destroy network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:01.133434 env[1408]: time="2024-02-09T19:07:01.131684941Z" level=error msg="encountered an error cleaning up failed sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:01.133434 env[1408]: time="2024-02-09T19:07:01.131761842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjz4d,Uid:80a02ef3-a462-4213-84a3-0d0df5da60f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:01.133622 kubelet[2026]: E0209 19:07:01.133001 2026 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:01.133622 kubelet[2026]: E0209 19:07:01.133064 2026 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:07:01.133622 
kubelet[2026]: E0209 19:07:01.133087 2026 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjz4d" Feb 9 19:07:01.132248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3-shm.mount: Deactivated successfully. Feb 9 19:07:01.133949 kubelet[2026]: E0209 19:07:01.133151 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kjz4d_calico-system(80a02ef3-a462-4213-84a3-0d0df5da60f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kjz4d_calico-system(80a02ef3-a462-4213-84a3-0d0df5da60f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:07:01.621886 kubelet[2026]: E0209 19:07:01.621829 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:01.981045 env[1408]: time="2024-02-09T19:07:01.980743989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:07:01.981223 kubelet[2026]: I0209 19:07:01.980866 2026 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:01.981803 
env[1408]: time="2024-02-09T19:07:01.981764804Z" level=info msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" Feb 9 19:07:02.006879 env[1408]: time="2024-02-09T19:07:02.006818281Z" level=error msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" failed" error="failed to destroy network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:02.007071 kubelet[2026]: E0209 19:07:02.007055 2026 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:02.007149 kubelet[2026]: E0209 19:07:02.007111 2026 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3} Feb 9 19:07:02.007204 kubelet[2026]: E0209 19:07:02.007158 2026 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80a02ef3-a462-4213-84a3-0d0df5da60f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:07:02.007204 kubelet[2026]: E0209 
19:07:02.007199 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80a02ef3-a462-4213-84a3-0d0df5da60f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kjz4d" podUID=80a02ef3-a462-4213-84a3-0d0df5da60f3 Feb 9 19:07:02.622227 kubelet[2026]: E0209 19:07:02.622167 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:03.622989 kubelet[2026]: E0209 19:07:03.622948 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:04.623844 kubelet[2026]: E0209 19:07:04.623795 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:05.624855 kubelet[2026]: E0209 19:07:05.624804 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:06.315103 kubelet[2026]: I0209 19:07:06.315051 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:06.444172 kubelet[2026]: I0209 19:07:06.444122 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsw4b\" (UniqueName: \"kubernetes.io/projected/f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225-kube-api-access-rsw4b\") pod \"nginx-deployment-8ffc5cf85-4dsbv\" (UID: \"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225\") " pod="default/nginx-deployment-8ffc5cf85-4dsbv" Feb 9 19:07:06.619298 env[1408]: time="2024-02-09T19:07:06.618966452Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-4dsbv,Uid:f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225,Namespace:default,Attempt:0,}" Feb 9 19:07:06.625807 kubelet[2026]: E0209 19:07:06.625781 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:07.140080 env[1408]: time="2024-02-09T19:07:07.140013234Z" level=error msg="Failed to destroy network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:07.145358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace-shm.mount: Deactivated successfully. Feb 9 19:07:07.147186 env[1408]: time="2024-02-09T19:07:07.147123828Z" level=error msg="encountered an error cleaning up failed sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:07.147416 env[1408]: time="2024-02-09T19:07:07.147348731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-4dsbv,Uid:f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:07.148337 kubelet[2026]: E0209 19:07:07.147771 2026 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:07.148337 kubelet[2026]: E0209 19:07:07.147837 2026 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-4dsbv" Feb 9 19:07:07.148337 kubelet[2026]: E0209 19:07:07.147892 2026 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-4dsbv" Feb 9 19:07:07.148650 kubelet[2026]: E0209 19:07:07.147959 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-4dsbv_default(f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-4dsbv_default(f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-4dsbv" podUID=f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225 Feb 9 19:07:07.626994 kubelet[2026]: E0209 19:07:07.626877 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:07.991906 kubelet[2026]: I0209 19:07:07.990810 2026 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:07.992276 env[1408]: time="2024-02-09T19:07:07.992233567Z" level=info msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" Feb 9 19:07:08.048860 env[1408]: time="2024-02-09T19:07:08.048793399Z" level=error msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" failed" error="failed to destroy network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:07:08.049544 kubelet[2026]: E0209 19:07:08.049337 2026 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:08.049544 kubelet[2026]: E0209 19:07:08.049398 2026 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace} Feb 9 19:07:08.049544 kubelet[2026]: E0209 19:07:08.049447 2026 
kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:07:08.049544 kubelet[2026]: E0209 19:07:08.049511 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-4dsbv" podUID=f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225 Feb 9 19:07:08.627655 kubelet[2026]: E0209 19:07:08.627606 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:09.598670 kubelet[2026]: E0209 19:07:09.598599 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:09.627883 kubelet[2026]: E0209 19:07:09.627825 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:10.233245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360316320.mount: Deactivated successfully. 
Feb 9 19:07:10.355306 env[1408]: time="2024-02-09T19:07:10.355237052Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:10.364341 env[1408]: time="2024-02-09T19:07:10.364299164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:10.368960 env[1408]: time="2024-02-09T19:07:10.368921621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:10.372751 env[1408]: time="2024-02-09T19:07:10.372720568Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:10.373137 env[1408]: time="2024-02-09T19:07:10.373108073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:07:10.389012 env[1408]: time="2024-02-09T19:07:10.388982669Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:07:10.430531 env[1408]: time="2024-02-09T19:07:10.430478281Z" level=info msg="CreateContainer within sandbox \"8dcb7b3bd44108e87f5759f428d697f3cd580c2ce9558534cf769959ba649dde\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3\"" Feb 9 19:07:10.431298 env[1408]: time="2024-02-09T19:07:10.431264290Z" level=info msg="StartContainer for 
\"f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3\"" Feb 9 19:07:10.491034 env[1408]: time="2024-02-09T19:07:10.488181292Z" level=info msg="StartContainer for \"f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3\" returns successfully" Feb 9 19:07:10.628055 kubelet[2026]: E0209 19:07:10.627983 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:10.708978 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:07:10.709125 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:07:11.013914 kubelet[2026]: I0209 19:07:11.013507 2026 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p555g" podStartSLOduration=-9.223372007841309e+09 pod.CreationTimestamp="2024-02-09 19:06:42 +0000 UTC" firstStartedPulling="2024-02-09 19:06:45.825990572 +0000 UTC m=+16.696259820" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:11.013229367 +0000 UTC m=+41.883498615" watchObservedRunningTime="2024-02-09 19:07:11.013467569 +0000 UTC m=+41.883736817" Feb 9 19:07:11.628652 kubelet[2026]: E0209 19:07:11.628590 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:12.013058 kernel: kauditd_printk_skb: 50 callbacks suppressed Feb 9 19:07:12.013216 kernel: audit: type=1400 audit(1707505631.992:245): avc: denied { write } for pid=2845 comm="tee" name="fd" dev="proc" ino=23507 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:11.992000 audit[2845]: AVC avc: denied { write } for pid=2845 comm="tee" name="fd" dev="proc" ino=23507 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:11.992000 audit[2845]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb84e096e a2=241 a3=1b6 items=1 ppid=2813 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.041396 kernel: audit: type=1300 audit(1707505631.992:245): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb84e096e a2=241 a3=1b6 items=1 ppid=2813 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.053069 systemd[1]: run-containerd-runc-k8s.io-f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3-runc.q6Nz27.mount: Deactivated successfully. Feb 9 19:07:11.992000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:07:12.077476 kernel: audit: type=1307 audit(1707505631.992:245): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:07:11.992000 audit: PATH item=0 name="/dev/fd/63" inode=23504 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.108035 kernel: audit: type=1302 audit(1707505631.992:245): item=0 name="/dev/fd/63" inode=23504 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.108147 kernel: audit: type=1327 audit(1707505631.992:245): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:11.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.045000 audit[2842]: AVC avc: denied { write } for 
pid=2842 comm="tee" name="fd" dev="proc" ino=23514 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.143862 kernel: audit: type=1400 audit(1707505632.045:246): avc: denied { write } for pid=2842 comm="tee" name="fd" dev="proc" ino=23514 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.143991 kernel: audit: type=1300 audit(1707505632.045:246): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff88297980 a2=241 a3=1b6 items=1 ppid=2815 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.045000 audit[2842]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff88297980 a2=241 a3=1b6 items=1 ppid=2815 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.045000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:07:12.165277 kernel: audit: type=1307 audit(1707505632.045:246): cwd="/etc/service/enabled/cni/log" Feb 9 19:07:12.165465 kernel: audit: type=1302 audit(1707505632.045:246): item=0 name="/dev/fd/63" inode=23501 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.045000 audit: PATH item=0 name="/dev/fd/63" inode=23501 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.052000 audit[2851]: AVC 
avc: denied { write } for pid=2851 comm="tee" name="fd" dev="proc" ino=23664 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.052000 audit[2851]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1025896f a2=241 a3=1b6 items=1 ppid=2811 pid=2851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.052000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:07:12.052000 audit: PATH item=0 name="/dev/fd/63" inode=23619 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.077000 audit[2879]: AVC avc: denied { write } for pid=2879 comm="tee" name="fd" dev="proc" ino=23525 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.180446 kernel: audit: type=1327 audit(1707505632.045:246): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.077000 audit[2879]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffffb4f497e a2=241 a3=1b6 items=1 ppid=2832 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.077000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:07:12.077000 audit: PATH item=0 name="/dev/fd/63" inode=23679 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.094000 audit[2870]: AVC avc: denied { write } for pid=2870 comm="tee" name="fd" dev="proc" ino=23694 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.094000 audit[2870]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1b98597e a2=241 a3=1b6 items=1 ppid=2837 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.094000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:07:12.094000 audit: PATH item=0 name="/dev/fd/63" inode=23668 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.094000 audit[2888]: AVC avc: denied { write } for pid=2888 comm="tee" name="fd" dev="proc" ino=23698 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.094000 audit[2888]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffed15697f a2=241 a3=1b6 items=1 ppid=2838 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.094000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:07:12.094000 audit: PATH item=0 
name="/dev/fd/63" inode=23682 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.099000 audit[2872]: AVC avc: denied { write } for pid=2872 comm="tee" name="fd" dev="proc" ino=23703 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:07:12.099000 audit[2872]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce973e97e a2=241 a3=1b6 items=1 ppid=2830 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.099000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:07:12.099000 audit: PATH item=0 name="/dev/fd/63" inode=23671 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:07:12.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:07:12.374459 kernel: Initializing XFRM netlink socket Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.507000 audit: BPF prog-id=10 op=LOAD Feb 9 19:07:12.507000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc3580680 a2=70 a3=7fd3c1964000 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.507000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.508000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
19:07:12.508000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.508000 audit: BPF prog-id=11 op=LOAD Feb 9 19:07:12.508000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc3580680 a2=70 a3=6e items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.508000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.508000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcc3580630 a2=70 a3=7ffcc3580680 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit: BPF prog-id=12 op=LOAD Feb 9 19:07:12.509000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc3580610 a2=70 a3=7ffcc3580680 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.509000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc35806f0 a2=70 a3=0 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.509000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc35806e0 a2=70 a3=0 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.509000 audit[2971]: AVC avc: 
denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.509000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffcc3580720 a2=70 a3=0 items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { perfmon } for pid=2971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit[2971]: AVC avc: denied { bpf } for pid=2971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.511000 audit: BPF prog-id=13 op=LOAD Feb 9 19:07:12.511000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc3580640 a2=70 a3=ffffffff items=0 ppid=2831 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:07:12.516000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.516000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe8f1ebcc0 a2=70 a3=fff80800 items=0 ppid=2831 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:07:12.516000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:07:12.516000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe8f1ebb90 a2=70 a3=3 items=0 ppid=2831 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:07:12.522000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:07:12.629670 kubelet[2026]: E0209 19:07:12.629567 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:12.668000 audit[2998]: NETFILTER_CFG table=mangle:85 family=2 entries=19 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:12.668000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fff21e8b810 a2=0 a3=7fff21e8b7fc items=0 ppid=2831 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.668000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:12.674000 audit[2997]: NETFILTER_CFG table=nat:86 family=2 entries=16 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:12.674000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fffc729e410 a2=0 a3=558f63867000 items=0 ppid=2831 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:12.674000 audit[2999]: NETFILTER_CFG table=filter:87 family=2 entries=39 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:12.674000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffeb7939590 a2=0 a3=55a25dc14000 items=0 ppid=2831 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:12.690000 audit[2996]: NETFILTER_CFG table=raw:88 family=2 entries=19 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:12.690000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffe08c46e60 a2=0 a3=560d221e7000 items=0 ppid=2831 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:12.690000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:13.434840 systemd-networkd[1556]: vxlan.calico: Link UP Feb 9 19:07:13.434850 systemd-networkd[1556]: vxlan.calico: Gained carrier Feb 9 19:07:13.630049 kubelet[2026]: E0209 19:07:13.629986 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:13.909776 env[1408]: time="2024-02-09T19:07:13.909714021Z" level=info msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] k8s.go 578: Cleaning up netns ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" iface="eth0" netns="/var/run/netns/cni-02d4b97e-b5fe-3711-4ddb-253f8bac47a3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" iface="eth0" netns="/var/run/netns/cni-02d4b97e-b5fe-3711-4ddb-253f8bac47a3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" iface="eth0" netns="/var/run/netns/cni-02d4b97e-b5fe-3711-4ddb-253f8bac47a3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] k8s.go 585: Releasing IP address(es) ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.951 [INFO][3026] utils.go 188: Calico CNI releasing IP address ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.971 [INFO][3032] ipam_plugin.go 415: Releasing address using handleID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.971 [INFO][3032] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.971 [INFO][3032] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.979 [WARNING][3032] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.979 [INFO][3032] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.980 [INFO][3032] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:07:13.982907 env[1408]: 2024-02-09 19:07:13.981 [INFO][3026] k8s.go 591: Teardown processing complete. ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:13.985564 systemd[1]: run-netns-cni\x2d02d4b97e\x2db5fe\x2d3711\x2d4ddb\x2d253f8bac47a3.mount: Deactivated successfully. Feb 9 19:07:13.986789 env[1408]: time="2024-02-09T19:07:13.986730312Z" level=info msg="TearDown network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" successfully" Feb 9 19:07:13.986883 env[1408]: time="2024-02-09T19:07:13.986788512Z" level=info msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" returns successfully" Feb 9 19:07:13.987625 env[1408]: time="2024-02-09T19:07:13.987592122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjz4d,Uid:80a02ef3-a462-4213-84a3-0d0df5da60f3,Namespace:calico-system,Attempt:1,}" Feb 9 19:07:14.129304 systemd-networkd[1556]: calicc38ce30516: Link UP Feb 9 19:07:14.141029 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:07:14.141125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicc38ce30516: link becomes ready Feb 9 19:07:14.141555 systemd-networkd[1556]: calicc38ce30516: Gained carrier Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.063 [INFO][3041] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.48-k8s-csi--node--driver--kjz4d-eth0 csi-node-driver- calico-system 80a02ef3-a462-4213-84a3-0d0df5da60f3 1317 0 2024-02-09 19:06:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.48 csi-node-driver-kjz4d eth0 default [] [] [kns.calico-system ksa.calico-system.default] 
calicc38ce30516 [] []}} ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.063 [INFO][3041] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.087 [INFO][3053] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" HandleID="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.099 [INFO][3053] ipam_plugin.go 268: Auto assigning IP ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" HandleID="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027dac0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.48", "pod":"csi-node-driver-kjz4d", "timestamp":"2024-02-09 19:07:14.087176153 +0000 UTC"}, Hostname:"10.200.8.48", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.100 [INFO][3053] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.100 [INFO][3053] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.100 [INFO][3053] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.48' Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.101 [INFO][3053] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.105 [INFO][3053] ipam.go 372: Looking up existing affinities for host host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.109 [INFO][3053] ipam.go 489: Trying affinity for 192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.111 [INFO][3053] ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.113 [INFO][3053] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.113 [INFO][3053] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.114 [INFO][3053] ipam.go 1682: Creating new handle: k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.119 [INFO][3053] ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.124 [INFO][3053] ipam.go 1216: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.124 [INFO][3053] 
ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" host="10.200.8.48" Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.124 [INFO][3053] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:07:14.158411 env[1408]: 2024-02-09 19:07:14.124 [INFO][3053] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" HandleID="k8s-pod-network.cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.126 [INFO][3041] k8s.go 385: Populated endpoint ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-csi--node--driver--kjz4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80a02ef3-a462-4213-84a3-0d0df5da60f3", ResourceVersion:"1317", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"10.200.8.48", ContainerID:"", Pod:"csi-node-driver-kjz4d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicc38ce30516", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.126 [INFO][3041] k8s.go 386: Calico CNI using IPs: [192.168.35.1/32] ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.127 [INFO][3041] dataplane_linux.go 68: Setting the host side veth name to calicc38ce30516 ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.142 [INFO][3041] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.143 [INFO][3041] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-csi--node--driver--kjz4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"80a02ef3-a462-4213-84a3-0d0df5da60f3", ResourceVersion:"1317", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b", Pod:"csi-node-driver-kjz4d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicc38ce30516", MAC:"66:2b:bb:a1:03:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:14.159331 env[1408]: 2024-02-09 19:07:14.150 [INFO][3041] k8s.go 491: Wrote updated endpoint to datastore ContainerID="cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b" Namespace="calico-system" Pod="csi-node-driver-kjz4d" WorkloadEndpoint="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:14.177000 audit[3076]: NETFILTER_CFG table=filter:89 family=2 entries=36 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:14.177000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff99496920 a2=0 a3=7fff9949690c items=0 ppid=2831 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:14.177000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:14.183146 env[1408]: time="2024-02-09T19:07:14.183081740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:14.183277 env[1408]: time="2024-02-09T19:07:14.183148840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:14.183277 env[1408]: time="2024-02-09T19:07:14.183178341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:14.183468 env[1408]: time="2024-02-09T19:07:14.183326242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b pid=3084 runtime=io.containerd.runc.v2 Feb 9 19:07:14.229484 env[1408]: time="2024-02-09T19:07:14.229442765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjz4d,Uid:80a02ef3-a462-4213-84a3-0d0df5da60f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b\"" Feb 9 19:07:14.231748 env[1408]: time="2024-02-09T19:07:14.231715291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:07:14.630577 kubelet[2026]: E0209 19:07:14.630524 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:14.762693 systemd-networkd[1556]: vxlan.calico: Gained IPv6LL Feb 9 19:07:14.985846 systemd[1]: 
run-containerd-runc-k8s.io-cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b-runc.6qCH5y.mount: Deactivated successfully. Feb 9 19:07:15.274729 systemd-networkd[1556]: calicc38ce30516: Gained IPv6LL Feb 9 19:07:15.631567 kubelet[2026]: E0209 19:07:15.631515 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:16.079669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361215388.mount: Deactivated successfully. Feb 9 19:07:16.543503 env[1408]: time="2024-02-09T19:07:16.543326597Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:16.550255 env[1408]: time="2024-02-09T19:07:16.550215072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:16.554921 env[1408]: time="2024-02-09T19:07:16.554885123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:16.560073 env[1408]: time="2024-02-09T19:07:16.560040979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:16.560665 env[1408]: time="2024-02-09T19:07:16.560632085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:07:16.570094 env[1408]: time="2024-02-09T19:07:16.570060288Z" level=info msg="CreateContainer within sandbox 
\"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:07:16.597527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746254571.mount: Deactivated successfully. Feb 9 19:07:16.610808 env[1408]: time="2024-02-09T19:07:16.610759430Z" level=info msg="CreateContainer within sandbox \"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3d0334d021366d0f7699f76d6751d2a568ba3714a4bc2bd1768bc6d8858cafaa\"" Feb 9 19:07:16.611528 env[1408]: time="2024-02-09T19:07:16.611358737Z" level=info msg="StartContainer for \"3d0334d021366d0f7699f76d6751d2a568ba3714a4bc2bd1768bc6d8858cafaa\"" Feb 9 19:07:16.636472 kubelet[2026]: E0209 19:07:16.636170 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:16.673717 env[1408]: time="2024-02-09T19:07:16.673677514Z" level=info msg="StartContainer for \"3d0334d021366d0f7699f76d6751d2a568ba3714a4bc2bd1768bc6d8858cafaa\" returns successfully" Feb 9 19:07:16.675308 env[1408]: time="2024-02-09T19:07:16.675281931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:07:17.636425 kubelet[2026]: E0209 19:07:17.636353 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:18.636984 kubelet[2026]: E0209 19:07:18.636927 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:18.667392 env[1408]: time="2024-02-09T19:07:18.667333081Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:18.674026 env[1408]: time="2024-02-09T19:07:18.673969050Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:18.681474 env[1408]: time="2024-02-09T19:07:18.681439328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:18.687267 env[1408]: time="2024-02-09T19:07:18.687227489Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:18.687725 env[1408]: time="2024-02-09T19:07:18.687688694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:07:18.689959 env[1408]: time="2024-02-09T19:07:18.689925317Z" level=info msg="CreateContainer within sandbox \"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:07:18.731446 env[1408]: time="2024-02-09T19:07:18.731394950Z" level=info msg="CreateContainer within sandbox \"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2bdc9e2c9bf6e1fae0e500bde128334b2ec856ce555aa906aa926dae274966f4\"" Feb 9 19:07:18.732193 env[1408]: time="2024-02-09T19:07:18.732116258Z" level=info msg="StartContainer for \"2bdc9e2c9bf6e1fae0e500bde128334b2ec856ce555aa906aa926dae274966f4\"" Feb 9 19:07:18.762284 systemd[1]: run-containerd-runc-k8s.io-2bdc9e2c9bf6e1fae0e500bde128334b2ec856ce555aa906aa926dae274966f4-runc.nGaCF0.mount: Deactivated successfully. 
Feb 9 19:07:18.797555 env[1408]: time="2024-02-09T19:07:18.796765733Z" level=info msg="StartContainer for \"2bdc9e2c9bf6e1fae0e500bde128334b2ec856ce555aa906aa926dae274966f4\" returns successfully" Feb 9 19:07:19.029634 kubelet[2026]: I0209 19:07:19.029515 2026 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kjz4d" podStartSLOduration=-9.223371999825294e+09 pod.CreationTimestamp="2024-02-09 19:06:42 +0000 UTC" firstStartedPulling="2024-02-09 19:07:14.230871081 +0000 UTC m=+45.101140329" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:19.029335855 +0000 UTC m=+49.899605103" watchObservedRunningTime="2024-02-09 19:07:19.029480856 +0000 UTC m=+49.899750204" Feb 9 19:07:19.638044 kubelet[2026]: E0209 19:07:19.637978 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:19.721481 kubelet[2026]: I0209 19:07:19.721449 2026 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:07:19.721673 kubelet[2026]: I0209 19:07:19.721497 2026 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:07:19.910700 env[1408]: time="2024-02-09T19:07:19.910146370Z" level=info msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.957 [INFO][3210] k8s.go 578: Cleaning up netns ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.958 [INFO][3210] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" iface="eth0" netns="/var/run/netns/cni-0f54f6fc-58c2-32db-0892-f849e81a49c4" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.958 [INFO][3210] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" iface="eth0" netns="/var/run/netns/cni-0f54f6fc-58c2-32db-0892-f849e81a49c4" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.958 [INFO][3210] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" iface="eth0" netns="/var/run/netns/cni-0f54f6fc-58c2-32db-0892-f849e81a49c4" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.958 [INFO][3210] k8s.go 585: Releasing IP address(es) ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.958 [INFO][3210] utils.go 188: Calico CNI releasing IP address ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.976 [INFO][3216] ipam_plugin.go 415: Releasing address using handleID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.977 [INFO][3216] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.977 [INFO][3216] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.986 [WARNING][3216] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.986 [INFO][3216] ipam_plugin.go 443: Releasing address using workloadID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.988 [INFO][3216] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:07:19.990198 env[1408]: 2024-02-09 19:07:19.989 [INFO][3210] k8s.go 591: Teardown processing complete. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:19.992588 env[1408]: time="2024-02-09T19:07:19.990438192Z" level=info msg="TearDown network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" successfully" Feb 9 19:07:19.992588 env[1408]: time="2024-02-09T19:07:19.990519393Z" level=info msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" returns successfully" Feb 9 19:07:19.992588 env[1408]: time="2024-02-09T19:07:19.991389702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-4dsbv,Uid:f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225,Namespace:default,Attempt:1,}" Feb 9 19:07:19.993098 systemd[1]: run-netns-cni\x2d0f54f6fc\x2d58c2\x2d32db\x2d0892\x2df849e81a49c4.mount: Deactivated successfully. 
Feb 9 19:07:20.165521 systemd-networkd[1556]: calic08c9aff0fe: Link UP Feb 9 19:07:20.176926 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:07:20.177021 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic08c9aff0fe: link becomes ready Feb 9 19:07:20.177320 systemd-networkd[1556]: calic08c9aff0fe: Gained carrier Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.065 [INFO][3223] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0 nginx-deployment-8ffc5cf85- default f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225 1346 0 2024-02-09 19:07:06 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.48 nginx-deployment-8ffc5cf85-4dsbv eth0 default [] [] [kns.default ksa.default.default] calic08c9aff0fe [] []}} ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.065 [INFO][3223] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.091 [INFO][3234] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" HandleID="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.104 [INFO][3234] ipam_plugin.go 268: Auto assigning IP 
ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" HandleID="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291590), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.48", "pod":"nginx-deployment-8ffc5cf85-4dsbv", "timestamp":"2024-02-09 19:07:20.091317807 +0000 UTC"}, Hostname:"10.200.8.48", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.104 [INFO][3234] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.104 [INFO][3234] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.104 [INFO][3234] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.48' Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.106 [INFO][3234] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.109 [INFO][3234] ipam.go 372: Looking up existing affinities for host host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.112 [INFO][3234] ipam.go 489: Trying affinity for 192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.114 [INFO][3234] ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.116 [INFO][3234] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.116 [INFO][3234] 
ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.138 [INFO][3234] ipam.go 1682: Creating new handle: k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96 Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.154 [INFO][3234] ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.161 [INFO][3234] ipam.go 1216: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.161 [INFO][3234] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" host="10.200.8.48" Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.161 [INFO][3234] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:07:20.192013 env[1408]: 2024-02-09 19:07:20.161 [INFO][3234] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" HandleID="k8s-pod-network.8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.162 [INFO][3223] k8s.go 385: Populated endpoint ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-4dsbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic08c9aff0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.162 [INFO][3223] k8s.go 386: Calico CNI using IPs: [192.168.35.2/32] ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.162 [INFO][3223] dataplane_linux.go 68: Setting the host side veth name to calic08c9aff0fe ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.177 [INFO][3223] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.178 [INFO][3223] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", 
"pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96", Pod:"nginx-deployment-8ffc5cf85-4dsbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic08c9aff0fe", MAC:"b2:89:fd:45:43:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:20.201588 env[1408]: 2024-02-09 19:07:20.183 [INFO][3223] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96" Namespace="default" Pod="nginx-deployment-8ffc5cf85-4dsbv" WorkloadEndpoint="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:20.210000 audit[3254]: NETFILTER_CFG table=filter:90 family=2 entries=40 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:20.216457 kernel: kauditd_printk_skb: 111 callbacks suppressed Feb 9 19:07:20.216529 kernel: audit: type=1325 audit(1707505640.210:271): table=filter:90 family=2 entries=40 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:20.210000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7ffc58374d70 a2=0 a3=7ffc58374d5c items=0 ppid=2831 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:07:20.242707 env[1408]: time="2024-02-09T19:07:20.242650926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:20.242839 env[1408]: time="2024-02-09T19:07:20.242820027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:20.242909 env[1408]: time="2024-02-09T19:07:20.242894328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:20.243102 env[1408]: time="2024-02-09T19:07:20.243079030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96 pid=3264 runtime=io.containerd.runc.v2 Feb 9 19:07:20.249389 kernel: audit: type=1300 audit(1707505640.210:271): arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7ffc58374d70 a2=0 a3=7ffc58374d5c items=0 ppid=2831 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:20.210000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:20.269014 kernel: audit: type=1327 audit(1707505640.210:271): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:20.310759 env[1408]: time="2024-02-09T19:07:20.310716809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-4dsbv,Uid:f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225,Namespace:default,Attempt:1,} returns sandbox id 
\"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96\"" Feb 9 19:07:20.312364 env[1408]: time="2024-02-09T19:07:20.312286925Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:07:20.638701 kubelet[2026]: E0209 19:07:20.638608 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:21.611877 systemd-networkd[1556]: calic08c9aff0fe: Gained IPv6LL Feb 9 19:07:21.639336 kubelet[2026]: E0209 19:07:21.639252 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:22.640006 kubelet[2026]: E0209 19:07:22.639902 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:23.640679 kubelet[2026]: E0209 19:07:23.640633 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:23.877847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043742013.mount: Deactivated successfully. 
Feb 9 19:07:24.641464 kubelet[2026]: E0209 19:07:24.641367 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:24.815907 env[1408]: time="2024-02-09T19:07:24.815848500Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:24.823620 env[1408]: time="2024-02-09T19:07:24.823518771Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:24.828118 env[1408]: time="2024-02-09T19:07:24.828027413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:24.833744 env[1408]: time="2024-02-09T19:07:24.833712866Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:24.834415 env[1408]: time="2024-02-09T19:07:24.834361872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:07:24.836607 env[1408]: time="2024-02-09T19:07:24.836577893Z" level=info msg="CreateContainer within sandbox \"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:07:24.860917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143683846.mount: Deactivated successfully. Feb 9 19:07:24.872110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255009801.mount: Deactivated successfully. 
Feb 9 19:07:24.887635 env[1408]: time="2024-02-09T19:07:24.887526667Z" level=info msg="CreateContainer within sandbox \"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3ce9348fbadf1e96cab10f0b6374508b6ad637a0a08be680972bf14d690edb70\"" Feb 9 19:07:24.888737 env[1408]: time="2024-02-09T19:07:24.888707878Z" level=info msg="StartContainer for \"3ce9348fbadf1e96cab10f0b6374508b6ad637a0a08be680972bf14d690edb70\"" Feb 9 19:07:24.920931 systemd[1]: run-containerd-runc-k8s.io-3ce9348fbadf1e96cab10f0b6374508b6ad637a0a08be680972bf14d690edb70-runc.fAbN4Z.mount: Deactivated successfully. Feb 9 19:07:24.956444 env[1408]: time="2024-02-09T19:07:24.956402507Z" level=info msg="StartContainer for \"3ce9348fbadf1e96cab10f0b6374508b6ad637a0a08be680972bf14d690edb70\" returns successfully" Feb 9 19:07:25.169246 kubelet[2026]: I0209 19:07:25.169127 2026 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-4dsbv" podStartSLOduration=-9.223372017685678e+09 pod.CreationTimestamp="2024-02-09 19:07:06 +0000 UTC" firstStartedPulling="2024-02-09 19:07:20.311892521 +0000 UTC m=+51.182161869" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:25.052313193 +0000 UTC m=+55.922582441" watchObservedRunningTime="2024-02-09 19:07:25.169097859 +0000 UTC m=+56.039367507" Feb 9 19:07:25.641730 kubelet[2026]: E0209 19:07:25.641660 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:25.897177 systemd[1]: run-containerd-runc-k8s.io-f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3-runc.9sHbvl.mount: Deactivated successfully. 
Feb 9 19:07:26.642108 kubelet[2026]: E0209 19:07:26.642046 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:27.642980 kubelet[2026]: E0209 19:07:27.642906 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:28.643705 kubelet[2026]: E0209 19:07:28.643638 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:29.598878 kubelet[2026]: E0209 19:07:29.598822 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:29.644732 kubelet[2026]: E0209 19:07:29.644674 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:29.652406 env[1408]: time="2024-02-09T19:07:29.652351841Z" level=info msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.692 [WARNING][3396] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-csi--node--driver--kjz4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80a02ef3-a462-4213-84a3-0d0df5da60f3", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b", Pod:"csi-node-driver-kjz4d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicc38ce30516", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.692 [INFO][3396] k8s.go 578: Cleaning up netns ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.692 [INFO][3396] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" iface="eth0" netns="" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.692 [INFO][3396] k8s.go 585: Releasing IP address(es) ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.692 [INFO][3396] utils.go 188: Calico CNI releasing IP address ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.713 [INFO][3402] ipam_plugin.go 415: Releasing address using handleID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.713 [INFO][3402] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.713 [INFO][3402] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.722 [WARNING][3402] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.722 [INFO][3402] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.724 [INFO][3402] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:07:29.726110 env[1408]: 2024-02-09 19:07:29.725 [INFO][3396] k8s.go 591: Teardown processing complete. ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.726880 env[1408]: time="2024-02-09T19:07:29.726152669Z" level=info msg="TearDown network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" successfully" Feb 9 19:07:29.726880 env[1408]: time="2024-02-09T19:07:29.726188969Z" level=info msg="StopPodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" returns successfully" Feb 9 19:07:29.726880 env[1408]: time="2024-02-09T19:07:29.726678073Z" level=info msg="RemovePodSandbox for \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" Feb 9 19:07:29.726880 env[1408]: time="2024-02-09T19:07:29.726722073Z" level=info msg="Forcibly stopping sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\"" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.771 [WARNING][3421] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-csi--node--driver--kjz4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80a02ef3-a462-4213-84a3-0d0df5da60f3", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"cb591f658cd0700cf01c309f3c522cb831398cd307f7843e997303cb315e279b", Pod:"csi-node-driver-kjz4d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicc38ce30516", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.771 [INFO][3421] k8s.go 578: Cleaning up netns ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.771 [INFO][3421] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" iface="eth0" netns="" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.771 [INFO][3421] k8s.go 585: Releasing IP address(es) ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.771 [INFO][3421] utils.go 188: Calico CNI releasing IP address ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.791 [INFO][3427] ipam_plugin.go 415: Releasing address using handleID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.791 [INFO][3427] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.791 [INFO][3427] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.800 [WARNING][3427] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.800 [INFO][3427] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" HandleID="k8s-pod-network.06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Workload="10.200.8.48-k8s-csi--node--driver--kjz4d-eth0" Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.802 [INFO][3427] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:07:29.804614 env[1408]: 2024-02-09 19:07:29.803 [INFO][3421] k8s.go 591: Teardown processing complete. ContainerID="06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3" Feb 9 19:07:29.805291 env[1408]: time="2024-02-09T19:07:29.804699337Z" level=info msg="TearDown network for sandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" successfully" Feb 9 19:07:29.814143 env[1408]: time="2024-02-09T19:07:29.814021316Z" level=info msg="RemovePodSandbox \"06ae594804a6d83fe65e2818405081dda0b6cc9b16f5ad75dcf30ea437035eb3\" returns successfully" Feb 9 19:07:29.815682 env[1408]: time="2024-02-09T19:07:29.815643230Z" level=info msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" Feb 9 19:07:29.891318 kubelet[2026]: I0209 19:07:29.889691 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:29.912000 audit[3478]: NETFILTER_CFG table=filter:91 family=2 entries=18 op=nft_register_rule pid=3478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.930397 kernel: audit: type=1325 audit(1707505649.912:272): table=filter:91 family=2 entries=18 op=nft_register_rule pid=3478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.930547 kernel: audit: type=1300 audit(1707505649.912:272): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe8a2f8af0 a2=0 a3=7ffe8a2f8adc items=0 ppid=2297 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:29.912000 audit[3478]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe8a2f8af0 a2=0 a3=7ffe8a2f8adc items=0 ppid=2297 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
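The audit `PROCTITLE` records in this log encode the audited process's command line as hex, with NUL bytes separating the argv entries. A small decoder (standard treatment of Linux audit `proctitle` payloads) recovers the iptables invocations; the payloads below are copied verbatim from the records above.

```python
def decode_proctitle(hex_str):
    """Decode an audit PROCTITLE hex payload into its argv list.

    The kernel emits the command line as hex-encoded bytes with NUL
    separators between arguments.
    """
    return bytes.fromhex(hex_str).decode("ascii").split("\x00")

# PROCTITLE payload from audit record 1707505640.210:271:
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
))  # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
    #  '--wait-interval', '50000']

# PROCTITLE payload from audit record 1707505649.912:272:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))  # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush',
    #  '--counters']
```

This confirms the `comm="iptables-nft-re"` / `comm="iptables-restor"` fields are just the kernel's 16-character truncation of the full command names.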
Feb 9 19:07:29.950815 kernel: audit: type=1327 audit(1707505649.912:272): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:29.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.867 [WARNING][3446] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96", Pod:"nginx-deployment-8ffc5cf85-4dsbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic08c9aff0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:29.952080 
env[1408]: 2024-02-09 19:07:29.867 [INFO][3446] k8s.go 578: Cleaning up netns ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.867 [INFO][3446] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" iface="eth0" netns="" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.867 [INFO][3446] k8s.go 585: Releasing IP address(es) ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.868 [INFO][3446] utils.go 188: Calico CNI releasing IP address ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.900 [INFO][3477] ipam_plugin.go 415: Releasing address using handleID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.900 [INFO][3477] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.901 [INFO][3477] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.926 [WARNING][3477] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.926 [INFO][3477] ipam_plugin.go 443: Releasing address using workloadID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.947 [INFO][3477] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:07:29.952080 env[1408]: 2024-02-09 19:07:29.951 [INFO][3446] k8s.go 591: Teardown processing complete. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:29.952650 env[1408]: time="2024-02-09T19:07:29.952608495Z" level=info msg="TearDown network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" successfully" Feb 9 19:07:29.952715 env[1408]: time="2024-02-09T19:07:29.952700996Z" level=info msg="StopPodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" returns successfully" Feb 9 19:07:29.953231 env[1408]: time="2024-02-09T19:07:29.953211900Z" level=info msg="RemovePodSandbox for \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" Feb 9 19:07:29.953358 env[1408]: time="2024-02-09T19:07:29.953323201Z" level=info msg="Forcibly stopping sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\"" Feb 9 19:07:29.948000 audit[3478]: NETFILTER_CFG table=nat:92 family=2 entries=94 op=nft_register_rule pid=3478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.969829 kernel: audit: type=1325 audit(1707505649.948:273): table=nat:92 family=2 entries=94 op=nft_register_rule pid=3478 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.948000 audit[3478]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe8a2f8af0 a2=0 a3=7ffe8a2f8adc items=0 ppid=2297 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:29.991518 kernel: audit: type=1300 audit(1707505649.948:273): arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe8a2f8af0 a2=0 a3=7ffe8a2f8adc items=0 ppid=2297 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:29.991608 kubelet[2026]: I0209 19:07:29.991247 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106-data\") pod \"nfs-server-provisioner-0\" (UID: \"6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106\") " pod="default/nfs-server-provisioner-0" Feb 9 19:07:29.991608 kubelet[2026]: I0209 19:07:29.991297 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krd4l\" (UniqueName: \"kubernetes.io/projected/6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106-kube-api-access-krd4l\") pod \"nfs-server-provisioner-0\" (UID: \"6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106\") " pod="default/nfs-server-provisioner-0" Feb 9 19:07:29.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:30.003443 kernel: audit: type=1327 audit(1707505649.948:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:30.034000 audit[3540]: NETFILTER_CFG 
table=filter:93 family=2 entries=30 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:30.049715 kernel: audit: type=1325 audit(1707505650.034:274): table=filter:93 family=2 entries=30 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:30.034000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffff39c3160 a2=0 a3=7ffff39c314c items=0 ppid=2297 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:30.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:30.087076 kernel: audit: type=1300 audit(1707505650.034:274): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffff39c3160 a2=0 a3=7ffff39c314c items=0 ppid=2297 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:30.087187 kernel: audit: type=1327 audit(1707505650.034:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:30.036000 audit[3540]: NETFILTER_CFG table=nat:94 family=2 entries=94 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:30.097402 kernel: audit: type=1325 audit(1707505650.036:275): table=nat:94 family=2 entries=94 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:30.036000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffff39c3160 a2=0 a3=7ffff39c314c items=0 ppid=2297 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:30.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.019 [WARNING][3504] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f8b2f488-2bd6-4226-8cbc-1ad4c3a6c225", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"8709de379ba1a3e06f7d57af668fb2ae23b00a9f30319022040c5491f730ac96", Pod:"nginx-deployment-8ffc5cf85-4dsbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic08c9aff0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.020 [INFO][3504] 
k8s.go 578: Cleaning up netns ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.020 [INFO][3504] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" iface="eth0" netns="" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.020 [INFO][3504] k8s.go 585: Releasing IP address(es) ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.020 [INFO][3504] utils.go 188: Calico CNI releasing IP address ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.061 [INFO][3532] ipam_plugin.go 415: Releasing address using handleID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.061 [INFO][3532] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.062 [INFO][3532] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.088 [WARNING][3532] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.088 [INFO][3532] ipam_plugin.go 443: Releasing address using workloadID ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" HandleID="k8s-pod-network.80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Workload="10.200.8.48-k8s-nginx--deployment--8ffc5cf85--4dsbv-eth0" Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.090 [INFO][3532] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:07:30.098564 env[1408]: 2024-02-09 19:07:30.097 [INFO][3504] k8s.go 591: Teardown processing complete. ContainerID="80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace" Feb 9 19:07:30.099224 env[1408]: time="2024-02-09T19:07:30.098609823Z" level=info msg="TearDown network for sandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" successfully" Feb 9 19:07:30.112642 env[1408]: time="2024-02-09T19:07:30.112608141Z" level=info msg="RemovePodSandbox \"80bb19628ce489f775f00ecadde47cf8049797cd8bc3812a9c53491f88335ace\" returns successfully" Feb 9 19:07:30.193240 env[1408]: time="2024-02-09T19:07:30.193115514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106,Namespace:default,Attempt:0,}" Feb 9 19:07:30.361222 systemd-networkd[1556]: cali60e51b789ff: Link UP Feb 9 19:07:30.373144 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:07:30.373241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 19:07:30.374271 systemd-networkd[1556]: cali60e51b789ff: Gained carrier Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.284 [INFO][3548] plugin.go 327: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.48-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106 1395 0 2024-02-09 19:07:29 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.48 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.284 [INFO][3548] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.309 [INFO][3559] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" HandleID="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Workload="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.324 [INFO][3559] ipam_plugin.go 268: Auto assigning IP 
ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" HandleID="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Workload="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d950), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.48", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 19:07:30.309339486 +0000 UTC"}, Hostname:"10.200.8.48", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.324 [INFO][3559] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.324 [INFO][3559] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.324 [INFO][3559] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.48' Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.326 [INFO][3559] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.333 [INFO][3559] ipam.go 372: Looking up existing affinities for host host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.337 [INFO][3559] ipam.go 489: Trying affinity for 192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.339 [INFO][3559] ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.342 [INFO][3559] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.342 [INFO][3559] ipam.go 1180: 
Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.344 [INFO][3559] ipam.go 1682: Creating new handle: k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884 Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.348 [INFO][3559] ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.356 [INFO][3559] ipam.go 1216: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 handle="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.356 [INFO][3559] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" host="10.200.8.48" Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.356 [INFO][3559] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:07:30.384464 env[1408]: 2024-02-09 19:07:30.356 [INFO][3559] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" HandleID="k8s-pod-network.856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Workload="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.385430 env[1408]: 2024-02-09 19:07:30.358 [INFO][3548] k8s.go 385: Populated endpoint ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.35.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:30.385430 
env[1408]: 2024-02-09 19:07:30.358 [INFO][3548] k8s.go 386: Calico CNI using IPs: [192.168.35.3/32] ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.385430 env[1408]: 2024-02-09 19:07:30.358 [INFO][3548] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.385430 env[1408]: 2024-02-09 19:07:30.375 [INFO][3548] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.385826 env[1408]: 2024-02-09 19:07:30.376 [INFO][3548] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", 
"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6a:e5:cc:ca:db:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:07:30.385826 env[1408]: 2024-02-09 19:07:30.383 [INFO][3548] k8s.go 491: Wrote updated endpoint to datastore ContainerID="856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.48-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:07:30.407687 env[1408]: time="2024-02-09T19:07:30.407622008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:30.407687 env[1408]: time="2024-02-09T19:07:30.407657908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:30.407884 env[1408]: time="2024-02-09T19:07:30.407673908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:30.408132 env[1408]: time="2024-02-09T19:07:30.408089212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884 pid=3585 runtime=io.containerd.runc.v2 Feb 9 19:07:30.416000 audit[3601]: NETFILTER_CFG table=filter:95 family=2 entries=44 op=nft_register_chain pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:07:30.416000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=22352 a0=3 a1=7fffff9f2800 a2=0 a3=7fffff9f27ec items=0 ppid=2831 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:30.416000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:07:30.477916 env[1408]: time="2024-02-09T19:07:30.477865695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6b9c5fdf-e1b8-44f2-bf1d-dd5484be4106,Namespace:default,Attempt:0,} returns sandbox id \"856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884\"" Feb 9 19:07:30.479488 env[1408]: time="2024-02-09T19:07:30.479451709Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:07:30.644844 kubelet[2026]: E0209 19:07:30.644788 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:31.645245 kubelet[2026]: E0209 19:07:31.645197 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:32.234635 systemd-networkd[1556]: cali60e51b789ff: Gained IPv6LL Feb 9 
19:07:32.646420 kubelet[2026]: E0209 19:07:32.646337 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:33.646828 kubelet[2026]: E0209 19:07:33.646786 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:33.773647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031567387.mount: Deactivated successfully. Feb 9 19:07:34.647983 kubelet[2026]: E0209 19:07:34.647917 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:35.649073 kubelet[2026]: E0209 19:07:35.649018 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:36.107406 env[1408]: time="2024-02-09T19:07:36.107341279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:36.114206 env[1408]: time="2024-02-09T19:07:36.114167031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:36.118396 env[1408]: time="2024-02-09T19:07:36.118352262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:36.122055 env[1408]: time="2024-02-09T19:07:36.122027490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:36.122666 env[1408]: 
time="2024-02-09T19:07:36.122634995Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:07:36.125041 env[1408]: time="2024-02-09T19:07:36.125006113Z" level=info msg="CreateContainer within sandbox \"856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:07:36.162052 env[1408]: time="2024-02-09T19:07:36.162008393Z" level=info msg="CreateContainer within sandbox \"856e2be178a5b7ae8127f14273b290ad6af9f986aa275c57046598bb97eac884\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"899a7efc3283cbdc8f7257eb98e69b7aa3bf8921f56cfabd64762e8139623e94\"" Feb 9 19:07:36.162663 env[1408]: time="2024-02-09T19:07:36.162629198Z" level=info msg="StartContainer for \"899a7efc3283cbdc8f7257eb98e69b7aa3bf8921f56cfabd64762e8139623e94\"" Feb 9 19:07:36.224430 env[1408]: time="2024-02-09T19:07:36.223359859Z" level=info msg="StartContainer for \"899a7efc3283cbdc8f7257eb98e69b7aa3bf8921f56cfabd64762e8139623e94\" returns successfully" Feb 9 19:07:36.649539 kubelet[2026]: E0209 19:07:36.649487 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:37.083319 kubelet[2026]: I0209 19:07:37.083288 2026 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372028771517e+09 pod.CreationTimestamp="2024-02-09 19:07:29 +0000 UTC" firstStartedPulling="2024-02-09 19:07:30.479083205 +0000 UTC m=+61.349352453" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:37.083142773 +0000 UTC m=+67.953412121" watchObservedRunningTime="2024-02-09 19:07:37.083259374 +0000 UTC m=+67.953528622" Feb 9 19:07:37.129000 audit[3703]: NETFILTER_CFG table=filter:96 
family=2 entries=18 op=nft_register_rule pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:37.135407 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 19:07:37.135522 kernel: audit: type=1325 audit(1707505657.129:277): table=filter:96 family=2 entries=18 op=nft_register_rule pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:37.147390 kernel: audit: type=1300 audit(1707505657.129:277): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff0e39ae40 a2=0 a3=7fff0e39ae2c items=0 ppid=2297 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:37.129000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff0e39ae40 a2=0 a3=7fff0e39ae2c items=0 ppid=2297 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:37.165693 kernel: audit: type=1327 audit(1707505657.129:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:37.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:37.134000 audit[3703]: NETFILTER_CFG table=nat:97 family=2 entries=178 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:37.184397 kernel: audit: type=1325 audit(1707505657.134:278): table=nat:97 family=2 entries=178 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:37.184463 kernel: audit: type=1300 audit(1707505657.134:278): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fff0e39ae40 a2=0 
a3=7fff0e39ae2c items=0 ppid=2297 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:37.134000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fff0e39ae40 a2=0 a3=7fff0e39ae2c items=0 ppid=2297 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:37.204432 kernel: audit: type=1327 audit(1707505657.134:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:37.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:37.650071 kubelet[2026]: E0209 19:07:37.649994 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:38.650595 kubelet[2026]: E0209 19:07:38.650539 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:39.650825 kubelet[2026]: E0209 19:07:39.650764 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:40.651059 kubelet[2026]: E0209 19:07:40.650988 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:41.651718 kubelet[2026]: E0209 19:07:41.651660 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:42.652808 kubelet[2026]: E0209 19:07:42.652728 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:07:43.653117 kubelet[2026]: E0209 19:07:43.653066 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:44.654013 kubelet[2026]: E0209 19:07:44.653960 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:45.654522 kubelet[2026]: E0209 19:07:45.654475 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:46.655620 kubelet[2026]: E0209 19:07:46.655556 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:47.656365 kubelet[2026]: E0209 19:07:47.656262 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:48.657512 kubelet[2026]: E0209 19:07:48.657450 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:49.598590 kubelet[2026]: E0209 19:07:49.598496 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:49.658199 kubelet[2026]: E0209 19:07:49.658140 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:50.659319 kubelet[2026]: E0209 19:07:50.659251 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:51.659905 kubelet[2026]: E0209 19:07:51.659849 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:52.660449 kubelet[2026]: E0209 19:07:52.660394 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:07:53.661140 kubelet[2026]: E0209 19:07:53.661078 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:54.661643 kubelet[2026]: E0209 19:07:54.661547 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:55.115466 systemd[1]: run-containerd-runc-k8s.io-f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3-runc.yPyKTj.mount: Deactivated successfully. Feb 9 19:07:55.662703 kubelet[2026]: E0209 19:07:55.662652 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:56.663055 kubelet[2026]: E0209 19:07:56.662916 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:57.663319 kubelet[2026]: E0209 19:07:57.663194 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:58.663948 kubelet[2026]: E0209 19:07:58.663893 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:59.664473 kubelet[2026]: E0209 19:07:59.664418 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:00.665155 kubelet[2026]: E0209 19:08:00.665098 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:01.666300 kubelet[2026]: E0209 19:08:01.666235 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:01.673210 kubelet[2026]: I0209 19:08:01.673170 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:08:01.871669 kubelet[2026]: I0209 19:08:01.871621 
2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvzs\" (UniqueName: \"kubernetes.io/projected/f6d78500-de3c-490a-8722-4e03d8d27610-kube-api-access-qnvzs\") pod \"test-pod-1\" (UID: \"f6d78500-de3c-490a-8722-4e03d8d27610\") " pod="default/test-pod-1" Feb 9 19:08:01.871911 kubelet[2026]: I0209 19:08:01.871702 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b2bb57c6-8367-4e51-9baa-a9bb552f3e13\" (UniqueName: \"kubernetes.io/nfs/f6d78500-de3c-490a-8722-4e03d8d27610-pvc-b2bb57c6-8367-4e51-9baa-a9bb552f3e13\") pod \"test-pod-1\" (UID: \"f6d78500-de3c-490a-8722-4e03d8d27610\") " pod="default/test-pod-1" Feb 9 19:08:02.139004 kernel: Failed to create system directory netfs Feb 9 19:08:02.139167 kernel: audit: type=1400 audit(1707505682.115:279): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.139198 kernel: Failed to create system directory netfs Feb 9 19:08:02.115000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.159556 kernel: audit: type=1400 audit(1707505682.115:279): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.159704 kernel: Failed to create system directory netfs Feb 9 19:08:02.115000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.177992 
kernel: audit: type=1400 audit(1707505682.115:279): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.178126 kernel: Failed to create system directory netfs Feb 9 19:08:02.115000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.197195 kernel: audit: type=1400 audit(1707505682.115:279): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.115000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.115000 audit[3758]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557fc1e165e0 a1=153bc a2=557fc06bc2b0 a3=5 items=0 ppid=9 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:02.115000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:08:02.222695 kernel: audit: type=1300 audit(1707505682.115:279): arch=c000003e syscall=175 success=yes exit=0 a0=557fc1e165e0 a1=153bc a2=557fc06bc2b0 a3=5 items=0 ppid=9 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:02.222833 kernel: audit: type=1327 audit(1707505682.115:279): 
proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:08:02.273445 kernel: Failed to create system directory fscache Feb 9 19:08:02.273524 kernel: audit: type=1400 audit(1707505682.248:280): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.273551 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.291717 kernel: audit: type=1400 audit(1707505682.248:280): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.291789 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.309898 kernel: audit: type=1400 audit(1707505682.248:280): avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.309967 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.329296 kernel: audit: type=1400 audit(1707505682.248:280): avc: denied { 
confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.329399 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.336249 kernel: Failed to create system directory fscache Feb 9 19:08:02.336315 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.343254 kernel: Failed to create system directory fscache Feb 9 19:08:02.343303 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 
19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.349938 kernel: Failed to create system directory fscache Feb 9 19:08:02.349993 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.358263 kernel: Failed to create system directory fscache Feb 9 19:08:02.358321 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.248000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.365140 kernel: Failed to create system directory fscache Feb 9 19:08:02.248000 audit[3758]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557fc202b9c0 a1=4c0fc a2=557fc06bc2b0 a3=5 items=0 ppid=9 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:02.248000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 
9 19:08:02.369396 kernel: FS-Cache: Loaded Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.463452 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.463532 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.463554 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.470022 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.470112 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.476427 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.476488 kernel: Failed to create system directory sunrpc Feb 9 
19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.482864 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.482916 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.489114 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.489171 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.495363 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.495439 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.501813 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.501887 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.508356 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.512392 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.512440 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.518327 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.518395 
kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.521527 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.531404 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.531466 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.536670 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.536727 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.543089 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.543153 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.548717 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.548773 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.554356 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.554461 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.559743 
kernel: Failed to create system directory sunrpc Feb 9 19:08:02.559787 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.565203 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.565249 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.570505 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.570567 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.575961 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.576011 kernel: Failed to create system directory sunrpc Feb 9 
19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.581052 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.581100 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.586272 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.586317 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.591706 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.591754 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.597083 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.597133 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.602240 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.602286 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.607688 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.607727 kernel: Failed to create system directory sunrpc Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.442000 
audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.612834 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.612879 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.616398 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.620738 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.620788 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.625933 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.628576 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.628626 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.633778 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.633823 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.638977 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.639044 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.644489 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.644551 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.649678 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.649735 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.654419 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.656857 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.656906 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.661336 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.661394 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.666119 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.666163 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.667335 kubelet[2026]: E0209 19:08:02.667301 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.674671 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.674711 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.674745 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.679961 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.680006 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.685255 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.685308 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.690390 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.690429 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.695547 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.695599 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.700893 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.700941 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.706160 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.706204 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.711421 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.711472 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.714700 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.719341 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.719404 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.724486 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.725393 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.729924 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.729973 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.735168 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.735215 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.740420 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.740468 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.745617 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.745669 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.750894 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.750935 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.756084 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.756137 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.761280 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.761327 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.764389 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.769133 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.769179 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.774460 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.774507 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.779948 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.779991 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.785367 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.785432 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.790681 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.790730 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.795909 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.795965 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.801164 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.801213 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.442000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.806290 kernel: Failed to create system directory sunrpc
Feb 9 19:08:02.818802 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 19:08:02.818861 kernel: RPC: Registered udp transport module.
Feb 9 19:08:02.818888 kernel: RPC: Registered tcp transport module.
Feb 9 19:08:02.821287 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 19:08:02.442000 audit[3758]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557fc2077ad0 a1=1588c4 a2=557fc06bc2b0 a3=5 items=6 ppid=9 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:08:02.442000 audit: CWD cwd="/"
Feb 9 19:08:02.442000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PATH item=1 name=(null) inode=28209 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PATH item=2 name=(null) inode=28209 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PATH item=3 name=(null) inode=28210 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PATH item=4 name=(null) inode=28209 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PATH item=5 name=(null) inode=28211 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:08:02.442000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.953207 kernel: Failed to create system directory nfs
Feb 9 19:08:02.953284 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.956316 kernel: Failed to create system directory nfs
Feb 9 19:08:02.956390 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.961666 kernel: Failed to create system directory nfs
Feb 9 19:08:02.961722 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.966836 kernel: Failed to create system directory nfs
Feb 9 19:08:02.966885 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.974497 kernel: Failed to create system directory nfs
Feb 9 19:08:02.974561 kernel: Failed to create system directory nfs
Feb 9 19:08:02.974584 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.979620 kernel: Failed to create system directory nfs
Feb 9 19:08:02.979676 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.984709 kernel: Failed to create system directory nfs
Feb 9 19:08:02.984767 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.987387 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.992445 kernel: Failed to create system directory nfs
Feb 9 19:08:02.992494 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.997576 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.000942 kernel: Failed to create system directory nfs
Feb 9 19:08:03.000997 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.006257 kernel: Failed to create system directory nfs
Feb 9 19:08:03.006313 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.011429 kernel: Failed to create system directory nfs
Feb 9 19:08:03.011481 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.016593 kernel: Failed to create system directory nfs
Feb 9 19:08:03.016666 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.021931 kernel: Failed to create system directory nfs
Feb 9 19:08:03.021991 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.026683 kernel: Failed to create system directory nfs
Feb 9 19:08:03.026754 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.031843 kernel: Failed to create system directory nfs
Feb 9 19:08:03.031884 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.037075 kernel: Failed to create system directory nfs
Feb 9 19:08:03.037113 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.039630 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.042401 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.047529 kernel: Failed to create system directory nfs
Feb 9 19:08:03.047639 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.051087 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.055570 kernel: Failed to create system directory nfs
Feb 9 19:08:03.055624 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.060735 kernel: Failed to create system directory nfs
Feb 9 19:08:03.060785 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:03.065551 kernel: Failed to create system directory nfs
Feb 9 19:08:03.065600 kernel: Failed to create system directory nfs
Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 19:08:02.937000 audit[3758]:
AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.070364 kernel: Failed to create system directory nfs Feb 9 19:08:03.070421 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.075160 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.077805 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.080312 kernel: Failed to create system directory nfs Feb 9 19:08:03.080365 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.085442 kernel: Failed to create system directory nfs Feb 9 19:08:03.085492 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.090424 kernel: Failed to create system directory nfs Feb 9 19:08:03.090463 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.095347 kernel: Failed to create system directory nfs Feb 9 19:08:03.095414 kernel: Failed to create system directory nfs Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:02.937000 audit[3758]: AVC avc: denied { confidentiality } for pid=3758 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.100587 kernel: Failed to create system directory nfs Feb 9 
19:08:03.115387 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:08:02.937000 audit[3758]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557fc221a680 a1=e29dc a2=557fc06bc2b0 a3=5 items=0 ppid=9 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:02.937000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.197148 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.197218 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.197243 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.203076 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 
19:08:03.205795 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.208555 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.211203 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.214101 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.214148 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.219279 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.219322 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for 
pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.224602 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.224649 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.229733 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.229781 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.234851 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.234888 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 19:08:03.240110 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.240155 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.243402 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.247811 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.247854 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.253001 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.253051 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: 
denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.258166 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.258210 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.261267 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.266400 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.266446 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.271833 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.271880 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use 
of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.277100 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.277137 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.282192 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.282240 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.287402 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.287449 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 
audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.292729 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.292804 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.296439 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.300901 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.300983 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.303589 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.308734 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.308780 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.311387 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.316482 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.316534 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.321818 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.321863 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.324442 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.329366 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.329419 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.334394 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.334437 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.339427 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.339469 kernel: Failed to create system directory nfs4 Feb 9 
19:08:03.342669 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.345190 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.349622 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.349680 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.354720 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.354779 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.359361 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.359412 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.364514 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.364571 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.369531 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.369568 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC 
avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.374605 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.374650 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.379658 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.379705 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.384797 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.384844 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.389805 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.389853 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.394791 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.394838 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.410437 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.410491 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.415234 kernel: Failed to create system directory nfs4 Feb 9 
19:08:03.415315 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.420243 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.420303 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.425283 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.425335 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.430554 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.430614 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for 
pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.435629 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.435676 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.440798 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.440847 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.175000 audit[3763]: AVC avc: denied { confidentiality } for pid=3763 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.446091 kernel: Failed to create system directory nfs4 Feb 9 19:08:03.590011 kernel: NFS: Registering the id_resolver key type Feb 9 19:08:03.590146 kernel: Key type id_resolver registered Feb 9 19:08:03.590172 kernel: Key type id_legacy registered Feb 9 19:08:03.175000 audit[3763]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 
a0=7fce3e49e010 a1=1d3cc4 a2=560895e5f2b0 a3=5 items=0 ppid=9 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:03.175000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.625056 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.625114 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.625135 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.630906 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.630954 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.636411 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.641981 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.642037 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.642059 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.644693 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.650137 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.650186 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.655097 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.655138 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.660197 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.660242 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.665592 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.665645 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.667954 
kubelet[2026]: E0209 19:08:03.667903 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:03.668404 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.673656 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.673708 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.679115 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.679163 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.684504 kernel: Failed to create 
system directory rpcgss Feb 9 19:08:03.684551 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.614000 audit[3764]: AVC avc: denied { confidentiality } for pid=3764 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:08:03.689923 kernel: Failed to create system directory rpcgss Feb 9 19:08:03.614000 audit[3764]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f9fd23ac010 a1=4f524 a2=562a7c2a92b0 a3=5 items=0 ppid=9 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:03.614000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 9 19:08:03.964023 nfsidmap[3770]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-2a68512ec5' Feb 9 19:08:03.986267 nfsidmap[3771]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-2a68512ec5' Feb 9 19:08:03.996000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:08:03.996000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:08:03.996000 audit[1511]: AVC avc: denied { watch_reads } for pid=1511 
comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:08:03.996000 audit[1511]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55dbd1fec040 a2=10 a3=81f78107f95be7e items=0 ppid=1 pid=1511 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:03.996000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:08:03.996000 audit[1511]: AVC avc: denied { watch_reads } for pid=1511 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:08:03.996000 audit[1511]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55dbd1fec040 a2=10 a3=81f78107f95be7e items=0 ppid=1 pid=1511 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:03.996000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:08:04.077806 env[1408]: time="2024-02-09T19:08:04.077741216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f6d78500-de3c-490a-8722-4e03d8d27610,Namespace:default,Attempt:0,}" Feb 9 19:08:04.216910 systemd-networkd[1556]: cali5ec59c6bf6e: Link UP Feb 9 19:08:04.226246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:08:04.226357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 9 19:08:04.228169 systemd-networkd[1556]: cali5ec59c6bf6e: Gained carrier Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.151 [INFO][3772] 
plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.48-k8s-test--pod--1-eth0 default f6d78500-de3c-490a-8722-4e03d8d27610 1495 0 2024-02-09 19:07:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.48 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.151 [INFO][3772] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.177 [INFO][3783] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" HandleID="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Workload="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.187 [INFO][3783] ipam_plugin.go 268: Auto assigning IP ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" HandleID="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Workload="10.200.8.48-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c1f90), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.48", "pod":"test-pod-1", "timestamp":"2024-02-09 19:08:04.177191153 +0000 UTC"}, Hostname:"10.200.8.48", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.187 [INFO][3783] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.188 [INFO][3783] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.188 [INFO][3783] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.48' Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.190 [INFO][3783] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.194 [INFO][3783] ipam.go 372: Looking up existing affinities for host host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.198 [INFO][3783] ipam.go 489: Trying affinity for 192.168.35.0/26 host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.200 [INFO][3783] ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.202 [INFO][3783] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.202 [INFO][3783] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.204 [INFO][3783] ipam.go 1682: Creating new handle: k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.207 [INFO][3783] ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.212 
[INFO][3783] ipam.go 1216: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 handle="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.212 [INFO][3783] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" host="10.200.8.48" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.212 [INFO][3783] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.212 [INFO][3783] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" HandleID="k8s-pod-network.c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Workload="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.213 [INFO][3772] k8s.go 385: Populated endpoint ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f6d78500-de3c-490a-8722-4e03d8d27610", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:08:04.234451 env[1408]: 2024-02-09 19:08:04.214 [INFO][3772] k8s.go 386: Calico CNI using IPs: [192.168.35.4/32] ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.238285 env[1408]: 2024-02-09 19:08:04.214 [INFO][3772] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.238285 env[1408]: 2024-02-09 19:08:04.227 [INFO][3772] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.238285 env[1408]: 2024-02-09 19:08:04.228 [INFO][3772] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f6d78500-de3c-490a-8722-4e03d8d27610", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2024, 
time.February, 9, 19, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.48", ContainerID:"c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8e:33:59:52:ae:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:08:04.238285 env[1408]: 2024-02-09 19:08:04.233 [INFO][3772] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.48-k8s-test--pod--1-eth0" Feb 9 19:08:04.253000 audit[3804]: NETFILTER_CFG table=filter:98 family=2 entries=34 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:08:04.253000 audit[3804]: SYSCALL arch=c000003e syscall=46 success=yes exit=17876 a0=3 a1=7ffd386784e0 a2=0 a3=7ffd386784cc items=0 ppid=2831 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:08:04.253000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:08:04.263085 env[1408]: 
time="2024-02-09T19:08:04.262936417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:08:04.263085 env[1408]: time="2024-02-09T19:08:04.262970617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:08:04.263085 env[1408]: time="2024-02-09T19:08:04.262980017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:08:04.263282 env[1408]: time="2024-02-09T19:08:04.263119218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d pid=3812 runtime=io.containerd.runc.v2 Feb 9 19:08:04.326299 env[1408]: time="2024-02-09T19:08:04.326251959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f6d78500-de3c-490a-8722-4e03d8d27610,Namespace:default,Attempt:0,} returns sandbox id \"c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d\"" Feb 9 19:08:04.328112 env[1408]: time="2024-02-09T19:08:04.328071869Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:08:04.668635 kubelet[2026]: E0209 19:08:04.668573 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:04.931184 env[1408]: time="2024-02-09T19:08:04.930711127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:08:04.938224 env[1408]: time="2024-02-09T19:08:04.938175267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
19:08:04.942163 env[1408]: time="2024-02-09T19:08:04.942129788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:08:04.948012 env[1408]: time="2024-02-09T19:08:04.947928320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:08:04.949090 env[1408]: time="2024-02-09T19:08:04.949054826Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:08:04.951546 env[1408]: time="2024-02-09T19:08:04.951515039Z" level=info msg="CreateContainer within sandbox \"c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:08:04.987865 env[1408]: time="2024-02-09T19:08:04.987832235Z" level=info msg="CreateContainer within sandbox \"c1f2d6d1461c902a631cecd636cb73d0065e719041ab61b8f4b319d31b734e6d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b024e0d83dcd66719d2ab0284257efe9cf7947b88d4f5e2a893da02d389df8f7\"" Feb 9 19:08:04.988267 env[1408]: time="2024-02-09T19:08:04.988234238Z" level=info msg="StartContainer for \"b024e0d83dcd66719d2ab0284257efe9cf7947b88d4f5e2a893da02d389df8f7\"" Feb 9 19:08:05.051365 env[1408]: time="2024-02-09T19:08:05.051322176Z" level=info msg="StartContainer for \"b024e0d83dcd66719d2ab0284257efe9cf7947b88d4f5e2a893da02d389df8f7\" returns successfully" Feb 9 19:08:05.669178 kubelet[2026]: E0209 19:08:05.669113 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:06.026942 systemd-networkd[1556]: cali5ec59c6bf6e: Gained IPv6LL Feb 9 19:08:06.670081 kubelet[2026]: E0209 
19:08:06.670021 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:07.670474 kubelet[2026]: E0209 19:08:07.670416 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:08.671140 kubelet[2026]: E0209 19:08:08.671076 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:09.598673 kubelet[2026]: E0209 19:08:09.598583 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:09.672271 kubelet[2026]: E0209 19:08:09.672214 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:10.672439 kubelet[2026]: E0209 19:08:10.672391 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:11.672824 kubelet[2026]: E0209 19:08:11.672767 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:12.673263 kubelet[2026]: E0209 19:08:12.673197 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:13.674116 kubelet[2026]: E0209 19:08:13.674057 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:14.675158 kubelet[2026]: E0209 19:08:14.675121 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:15.675991 kubelet[2026]: E0209 19:08:15.675932 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:16.677024 kubelet[2026]: E0209 19:08:16.676956 2026 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:17.677739 kubelet[2026]: E0209 19:08:17.677675 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:18.678795 kubelet[2026]: E0209 19:08:18.678739 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:19.678968 kubelet[2026]: E0209 19:08:19.678907 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:20.679844 kubelet[2026]: E0209 19:08:20.679785 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:21.680939 kubelet[2026]: E0209 19:08:21.680863 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:22.681958 kubelet[2026]: E0209 19:08:22.681893 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:23.682922 kubelet[2026]: E0209 19:08:23.682857 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:24.619575 kubelet[2026]: E0209 19:08:24.619465 2026 controller.go:189] failed to update lease, error: Put "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:08:24.683711 kubelet[2026]: E0209 19:08:24.683612 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:25.114540 systemd[1]: 
run-containerd-runc-k8s.io-f71f74790aea13df56ff40e9c3a29c105c1aed9588d0cb3ff57e8128a3d919b3-runc.qm0prR.mount: Deactivated successfully. Feb 9 19:08:25.684884 kubelet[2026]: E0209 19:08:25.684808 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:26.685939 kubelet[2026]: E0209 19:08:26.685875 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:27.686963 kubelet[2026]: E0209 19:08:27.686895 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:28.687843 kubelet[2026]: E0209 19:08:28.687770 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:28.930671 kubelet[2026]: E0209 19:08:28.930596 2026 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.37:34414->10.200.8.20:2379: read: connection timed out Feb 9 19:08:29.599072 kubelet[2026]: E0209 19:08:29.598996 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:29.688108 kubelet[2026]: E0209 19:08:29.688049 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:30.688769 kubelet[2026]: E0209 19:08:30.688703 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:31.689509 kubelet[2026]: E0209 19:08:31.689444 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:32.690260 kubelet[2026]: E0209 19:08:32.690196 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:08:33.691067 kubelet[2026]: E0209 19:08:33.691006 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:34.691971 kubelet[2026]: E0209 19:08:34.691910 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:35.692958 kubelet[2026]: E0209 19:08:35.692858 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:36.693506 kubelet[2026]: E0209 19:08:36.693438 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:37.694653 kubelet[2026]: E0209 19:08:37.694579 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:38.695111 kubelet[2026]: E0209 19:08:38.695047 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:38.931036 kubelet[2026]: E0209 19:08:38.930834 2026 controller.go:189] failed to update lease, error: Put "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:08:39.695896 kubelet[2026]: E0209 19:08:39.695840 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:40.696505 kubelet[2026]: E0209 19:08:40.696431 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:41.696636 kubelet[2026]: E0209 19:08:41.696569 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:42.697331 kubelet[2026]: 
E0209 19:08:42.697262 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:43.698205 kubelet[2026]: E0209 19:08:43.698139 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:44.698875 kubelet[2026]: E0209 19:08:44.698818 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:45.699670 kubelet[2026]: E0209 19:08:45.699561 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:46.700608 kubelet[2026]: E0209 19:08:46.700545 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:47.701552 kubelet[2026]: E0209 19:08:47.701462 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:48.702703 kubelet[2026]: E0209 19:08:48.702622 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:48.931676 kubelet[2026]: E0209 19:08:48.931575 2026 controller.go:189] failed to update lease, error: Put "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:08:49.598850 kubelet[2026]: E0209 19:08:49.598789 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:49.703285 kubelet[2026]: E0209 19:08:49.703219 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:50.703605 kubelet[2026]: E0209 19:08:50.703548 2026 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:51.704159 kubelet[2026]: E0209 19:08:51.704096 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:52.705059 kubelet[2026]: E0209 19:08:52.704991 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:53.706854 kubelet[2026]: E0209 19:08:53.706792 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:54.707571 kubelet[2026]: E0209 19:08:54.707504 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:55.219928 kubelet[2026]: E0209 19:08:55.219890 2026 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.48\": Get \"https://10.200.8.37:6443/api/v1/nodes/10.200.8.48?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:08:55.708543 kubelet[2026]: E0209 19:08:55.708475 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:56.709333 kubelet[2026]: E0209 19:08:56.709264 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:57.461097 update_engine[1366]: I0209 19:08:57.461005 1366 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 19:08:57.461097 update_engine[1366]: I0209 19:08:57.461094 1366 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 19:08:57.461947 update_engine[1366]: I0209 19:08:57.461473 1366 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 19:08:57.462112 update_engine[1366]: 
I0209 19:08:57.462055 1366 omaha_request_params.cc:62] Current group set to lts Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462267 1366 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462283 1366 update_attempter.cc:643] Scheduling an action processor start. Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462304 1366 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462342 1366 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462441 1366 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462450 1366 omaha_request_action.cc:271] Request: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: Feb 9 19:08:57.462683 update_engine[1366]: I0209 19:08:57.462456 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:08:57.464488 locksmithd[1475]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 19:08:57.464710 update_engine[1366]: I0209 19:08:57.464061 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:08:57.464710 update_engine[1366]: I0209 19:08:57.464397 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:08:57.486315 update_engine[1366]: E0209 19:08:57.486279 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:08:57.486455 update_engine[1366]: I0209 19:08:57.486415 1366 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 19:08:57.710515 kubelet[2026]: E0209 19:08:57.710445 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:58.710798 kubelet[2026]: E0209 19:08:58.710700 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:58.932552 kubelet[2026]: E0209 19:08:58.932486 2026 controller.go:189] failed to update lease, error: Put "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:08:58.932552 kubelet[2026]: I0209 19:08:58.932551 2026 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 9 19:08:59.711979 kubelet[2026]: E0209 19:08:59.711885 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:00.712956 kubelet[2026]: E0209 19:09:00.712895 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:01.714007 kubelet[2026]: E0209 19:09:01.713907 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:02.714784 kubelet[2026]: E0209 19:09:02.714719 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:03.715056 kubelet[2026]: E0209 19:09:03.714927 2026 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:04.715338 kubelet[2026]: E0209 19:09:04.715279 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:05.220623 kubelet[2026]: E0209 19:09:05.220566 2026 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.48\": Get \"https://10.200.8.37:6443/api/v1/nodes/10.200.8.48?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:09:05.715900 kubelet[2026]: E0209 19:09:05.715795 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:06.586716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.587130 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.587333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.597903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.611360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.611602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.622292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.622525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.641363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.641614 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.646976 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.652448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.670935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.671172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.676232 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.681682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.700250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.700474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.711024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.711223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.716210 kubelet[2026]: E0209 19:09:06.716133 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:09:06.730430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.730627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.741264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:06.741472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.751996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.752189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.762625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.762822 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.773592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.773799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.778731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.784141 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.800489 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.800709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.800851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.811149 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.817213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.817417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:06.827835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.828036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.838667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.838865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.843847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.849294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.865220 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.865417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.865557 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.875876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.881959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.882152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.892623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.892820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.905365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:06.905577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.916228 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.916451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.932316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.932517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.932658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.942917 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.949085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.949280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.959654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.959850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.970497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.970719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.976075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.981461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:06.997478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.997674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:06.997808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.008112 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.013891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.014083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.018981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.032755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.042284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.042492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.047165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.052516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.068981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.069186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.069323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:07.079975 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.080290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.085539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.096450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.096662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.108665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.108891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.119501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.119695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.130585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.130776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.141426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.141632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.159094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.159286 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:09:07.159465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.169919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.214331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.214738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215017 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.215911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.232867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.233107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.233244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.243631 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.243845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 
hv 0xc0000001 Feb 9 19:09:07.254513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.254723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.265582 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.306743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.306902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.307877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.317149 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.317426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.328018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Feb 9 19:09:07.328260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.338914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.339115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.350154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.362310 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.362549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.362690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.373441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.373662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.382872 update_engine[1366]: I0209 19:09:07.382412 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:09:07.382872 update_engine[1366]: I0209 19:09:07.382629 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:09:07.382872 update_engine[1366]: I0209 19:09:07.382838 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:09:07.384274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.384495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.395130 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.395386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.400805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.404451 update_engine[1366]: E0209 19:09:07.404310 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:09:07.404451 update_engine[1366]: I0209 19:09:07.404425 1366 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 19:09:07.413400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.413618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.424654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.424865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.435837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.436058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.447189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:09:07.447407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 
0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.451959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.456708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.467638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.467836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.478466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.478668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.493601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.493811 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:07.716560 kubelet[2026]: E0209 19:09:07.716503 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:09:08.717645 kubelet[2026]: E0209 19:09:08.717583 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:09:08.932947 kubelet[2026]: E0209 19:09:08.932912 2026 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:09:08.933876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:08.934087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:08.945051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:08.945254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:08.956297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:09:08.956520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001