Feb 12 19:40:46.044974 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:40:46.044999 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:40:46.045008 kernel: BIOS-provided physical RAM map:
Feb 12 19:40:46.045016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:40:46.045022 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 12 19:40:46.045027 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 12 19:40:46.045037 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 12 19:40:46.045044 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 12 19:40:46.045050 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 12 19:40:46.045056 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 12 19:40:46.045063 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 12 19:40:46.045070 kernel: printk: bootconsole [earlyser0] enabled
Feb 12 19:40:46.045076 kernel: NX (Execute Disable) protection: active
Feb 12 19:40:46.045081 kernel: efi: EFI v2.70 by Microsoft
Feb 12 19:40:46.045093 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 12 19:40:46.045100 kernel: random: crng init done
Feb 12 19:40:46.045106 kernel: SMBIOS 3.1.0 present.
Feb 12 19:40:46.045115 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:40:46.046937 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 12 19:40:46.046960 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 12 19:40:46.046973 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 12 19:40:46.046984 kernel: Hyper-V: Nested features: 0x1e0101
Feb 12 19:40:46.047001 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 12 19:40:46.047013 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 12 19:40:46.047025 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 12 19:40:46.047037 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 12 19:40:46.047049 kernel: tsc: Detected 2593.905 MHz processor
Feb 12 19:40:46.047062 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:40:46.047074 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:40:46.047086 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 12 19:40:46.047099 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:40:46.047111 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 12 19:40:46.047135 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 12 19:40:46.047146 kernel: Using GB pages for direct mapping
Feb 12 19:40:46.047158 kernel: Secure boot disabled
Feb 12 19:40:46.047170 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:40:46.047182 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 12 19:40:46.047194 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047206 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047218 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:40:46.047238 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 12 19:40:46.047251 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047264 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047277 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047290 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047303 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047319 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047331 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:46.047344 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 12 19:40:46.047357 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 12 19:40:46.047370 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 12 19:40:46.047383 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 12 19:40:46.047396 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 12 19:40:46.047409 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 12 19:40:46.047424 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 12 19:40:46.047437 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 12 19:40:46.047450 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 12 19:40:46.047463 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 12 19:40:46.047476 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:40:46.047489 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:40:46.047502 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 12 19:40:46.047515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 12 19:40:46.047528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 12 19:40:46.047543 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 12 19:40:46.047556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 12 19:40:46.047569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 12 19:40:46.047582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 12 19:40:46.047595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 12 19:40:46.047607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 12 19:40:46.047620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 12 19:40:46.047634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 12 19:40:46.047646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 12 19:40:46.047661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 12 19:40:46.047674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 12 19:40:46.047687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 12 19:40:46.047700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 12 19:40:46.047713 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 12 19:40:46.047726 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 12 19:40:46.047739 kernel: Zone ranges:
Feb 12 19:40:46.047752 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:40:46.047765 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 19:40:46.047781 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:40:46.047794 kernel: Movable zone start for each node
Feb 12 19:40:46.047807 kernel: Early memory node ranges
Feb 12 19:40:46.047820 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:40:46.047833 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 12 19:40:46.047846 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 12 19:40:46.047858 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:40:46.047871 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 12 19:40:46.047884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:40:46.047899 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:40:46.047912 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 12 19:40:46.047925 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 12 19:40:46.047938 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 12 19:40:46.047951 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:40:46.047964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:40:46.047977 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:40:46.047989 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 12 19:40:46.048002 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:40:46.048017 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 12 19:40:46.048031 kernel: Booting paravirtualized kernel on Hyper-V
Feb 12 19:40:46.048044 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:40:46.048057 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:40:46.048069 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:40:46.048082 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:40:46.048095 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:40:46.048107 kernel: Hyper-V: PV spinlocks enabled
Feb 12 19:40:46.048120 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:40:46.048144 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 12 19:40:46.048157 kernel: Policy zone: Normal
Feb 12 19:40:46.048171 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:40:46.048185 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:40:46.048197 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 19:40:46.048210 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:40:46.048223 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:40:46.048237 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 12 19:40:46.048252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:40:46.048265 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:40:46.048288 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:40:46.048304 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:40:46.048318 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:40:46.048332 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:40:46.048345 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:40:46.048359 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:40:46.048373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:40:46.048387 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:40:46.048400 kernel: Using NULL legacy PIC
Feb 12 19:40:46.048416 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 12 19:40:46.048430 kernel: Console: colour dummy device 80x25
Feb 12 19:40:46.048443 kernel: printk: console [tty1] enabled
Feb 12 19:40:46.048457 kernel: printk: console [ttyS0] enabled
Feb 12 19:40:46.048471 kernel: printk: bootconsole [earlyser0] disabled
Feb 12 19:40:46.048486 kernel: ACPI: Core revision 20210730
Feb 12 19:40:46.048500 kernel: Failed to register legacy timer interrupt
Feb 12 19:40:46.048514 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:40:46.048527 kernel: Hyper-V: Using IPI hypercalls
Feb 12 19:40:46.048541 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 12 19:40:46.048554 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 19:40:46.048568 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 19:40:46.048581 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:40:46.048595 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:40:46.048608 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:40:46.048625 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:40:46.048639 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 19:40:46.048652 kernel: RETBleed: Vulnerable
Feb 12 19:40:46.048665 kernel: Speculative Store Bypass: Vulnerable
Feb 12 19:40:46.048679 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:40:46.048692 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:40:46.048706 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 19:40:46.048719 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:40:46.048733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:40:46.048747 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:40:46.048762 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 19:40:46.048776 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 19:40:46.048789 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 19:40:46.048803 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:40:46.048816 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 12 19:40:46.048830 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 12 19:40:46.048843 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 12 19:40:46.048856 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 12 19:40:46.048870 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:40:46.048883 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:40:46.048897 kernel: LSM: Security Framework initializing
Feb 12 19:40:46.048910 kernel: SELinux: Initializing.
Feb 12 19:40:46.048926 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:40:46.048940 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:40:46.048954 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 19:40:46.048968 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 19:40:46.048981 kernel: signal: max sigframe size: 3632
Feb 12 19:40:46.048994 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:40:46.049008 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:40:46.049022 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:40:46.049036 kernel: x86: Booting SMP configuration:
Feb 12 19:40:46.049049 kernel: .... node #0, CPUs: #1
Feb 12 19:40:46.049066 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 12 19:40:46.049080 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 19:40:46.049094 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:40:46.049107 kernel: smpboot: Max logical packages: 1
Feb 12 19:40:46.049121 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 12 19:40:46.049144 kernel: devtmpfs: initialized
Feb 12 19:40:46.049158 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:40:46.049172 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 12 19:40:46.049188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:40:46.049202 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:40:46.049216 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:40:46.049230 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:40:46.049244 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:40:46.049258 kernel: audit: type=2000 audit(1707766844.023:1): state=initialized audit_enabled=0 res=1
Feb 12 19:40:46.049271 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:40:46.049285 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:40:46.049299 kernel: cpuidle: using governor menu
Feb 12 19:40:46.049315 kernel: ACPI: bus type PCI registered
Feb 12 19:40:46.049328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:40:46.049342 kernel: dca service started, version 1.12.1
Feb 12 19:40:46.049356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:40:46.049370 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:40:46.049383 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:40:46.049397 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:40:46.049411 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:40:46.049424 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:40:46.049439 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:40:46.049453 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:40:46.049466 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:40:46.049480 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:40:46.049494 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:40:46.049507 kernel: ACPI: Interpreter enabled
Feb 12 19:40:46.049521 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:40:46.049534 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:40:46.049548 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:40:46.049563 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 12 19:40:46.049576 kernel: iommu: Default domain type: Translated
Feb 12 19:40:46.049590 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:40:46.049603 kernel: vgaarb: loaded
Feb 12 19:40:46.049617 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:40:46.049631 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:40:46.049645 kernel: PTP clock support registered
Feb 12 19:40:46.049658 kernel: Registered efivars operations
Feb 12 19:40:46.049672 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:40:46.049686 kernel: PCI: System does not support PCI
Feb 12 19:40:46.049701 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 12 19:40:46.049715 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:40:46.049729 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:40:46.049742 kernel: pnp: PnP ACPI init
Feb 12 19:40:46.049756 kernel: pnp: PnP ACPI: found 3 devices
Feb 12 19:40:46.049769 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:40:46.049783 kernel: NET: Registered PF_INET protocol family
Feb 12 19:40:46.049797 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:40:46.049813 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 19:40:46.049827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:40:46.049841 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:40:46.049854 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 19:40:46.049868 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 19:40:46.049881 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:40:46.049895 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:40:46.049908 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:40:46.049921 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:40:46.049939 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:40:46.049952 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 19:40:46.049966 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 12 19:40:46.049979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:40:46.049997 kernel: Initialise system trusted keyrings
Feb 12 19:40:46.050011 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 19:40:46.050024 kernel: Key type asymmetric registered
Feb 12 19:40:46.050038 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:40:46.050051 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:40:46.050068 kernel: io scheduler mq-deadline registered
Feb 12 19:40:46.050081 kernel: io scheduler kyber registered
Feb 12 19:40:46.050095 kernel: io scheduler bfq registered
Feb 12 19:40:46.050109 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:40:46.050134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:40:46.050148 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:40:46.050158 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 19:40:46.050167 kernel: i8042: PNP: No PS/2 controller found.
Feb 12 19:40:46.050308 kernel: rtc_cmos 00:02: registered as rtc0
Feb 12 19:40:46.050383 kernel: rtc_cmos 00:02: setting system clock to 2024-02-12T19:40:45 UTC (1707766845)
Feb 12 19:40:46.050468 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 12 19:40:46.050481 kernel: fail to initialize ptp_kvm
Feb 12 19:40:46.050492 kernel: intel_pstate: CPU model not supported
Feb 12 19:40:46.050503 kernel: efifb: probing for efifb
Feb 12 19:40:46.050513 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:40:46.050525 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:40:46.050537 kernel: efifb: scrolling: redraw
Feb 12 19:40:46.050553 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:40:46.050564 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:40:46.050575 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:40:46.050587 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:40:46.050595 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:40:46.050605 kernel: Segment Routing with IPv6
Feb 12 19:40:46.050614 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:40:46.050623 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:40:46.050633 kernel: Key type dns_resolver registered
Feb 12 19:40:46.050643 kernel: IPI shorthand broadcast: enabled
Feb 12 19:40:46.050652 kernel: sched_clock: Marking stable (779659700, 24678600)->(1018142700, -213804400)
Feb 12 19:40:46.050661 kernel: registered taskstats version 1
Feb 12 19:40:46.050670 kernel: Loading compiled-in X.509 certificates
Feb 12 19:40:46.050680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:40:46.050687 kernel: Key type .fscrypt registered
Feb 12 19:40:46.050695 kernel: Key type fscrypt-provisioning registered
Feb 12 19:40:46.050704 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:40:46.050717 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:40:46.050725 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:40:46.050732 kernel: ima: No architecture policies found
Feb 12 19:40:46.050743 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:40:46.050750 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:40:46.050761 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:40:46.050768 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:40:46.050776 kernel: Run /init as init process
Feb 12 19:40:46.050786 kernel: with arguments:
Feb 12 19:40:46.050794 kernel: /init
Feb 12 19:40:46.050805 kernel: with environment:
Feb 12 19:40:46.050812 kernel: HOME=/
Feb 12 19:40:46.050821 kernel: TERM=linux
Feb 12 19:40:46.050829 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:40:46.050841 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:40:46.050851 systemd[1]: Detected virtualization microsoft.
Feb 12 19:40:46.050860 systemd[1]: Detected architecture x86-64.
Feb 12 19:40:46.050872 systemd[1]: Running in initrd.
Feb 12 19:40:46.050882 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:40:46.050890 systemd[1]: Hostname set to .
Feb 12 19:40:46.050899 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:40:46.050909 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:40:46.050917 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:40:46.050927 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:40:46.050935 systemd[1]: Reached target paths.target.
Feb 12 19:40:46.050944 systemd[1]: Reached target slices.target.
Feb 12 19:40:46.050955 systemd[1]: Reached target swap.target.
Feb 12 19:40:46.050965 systemd[1]: Reached target timers.target.
Feb 12 19:40:46.050973 systemd[1]: Listening on iscsid.socket.
Feb 12 19:40:46.050981 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:40:46.050992 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:40:46.051000 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:40:46.051010 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:40:46.051020 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:40:46.051030 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:40:46.051038 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:40:46.051049 systemd[1]: Reached target sockets.target.
Feb 12 19:40:46.051057 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:40:46.051065 systemd[1]: Finished network-cleanup.service.
Feb 12 19:40:46.051075 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:40:46.051083 systemd[1]: Starting systemd-journald.service...
Feb 12 19:40:46.051093 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:40:46.051103 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:40:46.051113 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:40:46.051121 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:40:46.061919 kernel: audit: type=1130 audit(1707766846.050:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.061939 systemd-journald[183]: Journal started
Feb 12 19:40:46.062012 systemd-journald[183]: Runtime Journal (/run/log/journal/24fc718831c84091a3b0a092e5782f25) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:40:46.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.045190 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:40:46.068060 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:40:46.079141 systemd[1]: Started systemd-journald.service.
Feb 12 19:40:46.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.091155 kernel: audit: type=1130 audit(1707766846.074:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.106276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:40:46.102663 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:40:46.106229 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:40:46.115698 kernel: Bridge firewalling registered
Feb 12 19:40:46.114227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:40:46.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.131982 kernel: audit: type=1130 audit(1707766846.102:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.132042 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 12 19:40:46.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.158581 kernel: audit: type=1130 audit(1707766846.104:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.156244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:40:46.169719 systemd-resolved[185]: Positive Trust Anchors:
Feb 12 19:40:46.169777 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:40:46.200053 kernel: SCSI subsystem initialized
Feb 12 19:40:46.200087 kernel: audit: type=1130 audit(1707766846.156:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.174290 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:40:46.179365 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:40:46.179406 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:40:46.230425 kernel: audit: type=1130 audit(1707766846.172:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.184937 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 12 19:40:46.246146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:40:46.246195 dracut-cmdline[200]: dracut-dracut-053 Feb 12 19:40:46.246195 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:40:46.282389 kernel: audit: type=1130 audit(1707766846.245:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.282424 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:40:46.282448 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:40:46.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.242907 systemd[1]: Started systemd-resolved.service. Feb 12 19:40:46.245200 systemd[1]: Reached target nss-lookup.target. Feb 12 19:40:46.290618 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 12 19:40:46.293894 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:40:46.313457 kernel: audit: type=1130 audit(1707766846.296:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:46.308653 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:40:46.323978 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:40:46.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.342168 kernel: audit: type=1130 audit(1707766846.328:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.364148 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:40:46.377145 kernel: iscsi: registered transport (tcp) Feb 12 19:40:46.402824 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:40:46.402900 kernel: QLogic iSCSI HBA Driver Feb 12 19:40:46.432209 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:40:46.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.437625 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:40:46.489153 kernel: raid6: avx512x4 gen() 18279 MB/s Feb 12 19:40:46.509138 kernel: raid6: avx512x4 xor() 8529 MB/s Feb 12 19:40:46.529137 kernel: raid6: avx512x2 gen() 18354 MB/s Feb 12 19:40:46.549143 kernel: raid6: avx512x2 xor() 29913 MB/s Feb 12 19:40:46.570136 kernel: raid6: avx512x1 gen() 18325 MB/s Feb 12 19:40:46.590137 kernel: raid6: avx512x1 xor() 27003 MB/s Feb 12 19:40:46.610140 kernel: raid6: avx2x4 gen() 18317 MB/s Feb 12 19:40:46.630136 kernel: raid6: avx2x4 xor() 7898 MB/s Feb 12 19:40:46.650136 kernel: raid6: avx2x2 gen() 18301 MB/s Feb 12 19:40:46.671141 kernel: raid6: avx2x2 xor() 22338 MB/s Feb 12 19:40:46.691137 kernel: raid6: avx2x1 gen() 13959 MB/s Feb 12 19:40:46.711136 kernel: raid6: avx2x1 xor() 19549 MB/s Feb 12 19:40:46.731139 kernel: raid6: sse2x4 gen() 11762 MB/s Feb 12 19:40:46.751136 kernel: raid6: sse2x4 xor() 7312 MB/s Feb 12 19:40:46.771136 kernel: raid6: sse2x2 gen() 12739 MB/s Feb 12 19:40:46.791138 kernel: raid6: sse2x2 xor() 7543 MB/s Feb 12 19:40:46.811136 kernel: raid6: sse2x1 gen() 11689 MB/s Feb 12 19:40:46.835254 kernel: raid6: sse2x1 xor() 5905 MB/s Feb 12 19:40:46.835276 kernel: raid6: using algorithm avx512x2 gen() 18354 MB/s Feb 12 19:40:46.835289 kernel: raid6: .... xor() 29913 MB/s, rmw enabled Feb 12 19:40:46.838776 kernel: raid6: using avx512x2 recovery algorithm Feb 12 19:40:46.858148 kernel: xor: automatically using best checksumming function avx Feb 12 19:40:46.954149 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:40:46.962742 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:40:46.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.966000 audit: BPF prog-id=7 op=LOAD Feb 12 19:40:46.967000 audit: BPF prog-id=8 op=LOAD Feb 12 19:40:46.967672 systemd[1]: Starting systemd-udevd.service... 
Feb 12 19:40:46.983029 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 19:40:46.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:46.990113 systemd[1]: Started systemd-udevd.service. Feb 12 19:40:46.993750 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:40:47.014182 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Feb 12 19:40:47.046187 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:40:47.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:47.052242 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:40:47.085940 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:40:47.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:47.142154 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:40:47.173677 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 12 19:40:47.173741 kernel: AES CTR mode by8 optimization enabled Feb 12 19:40:47.177096 kernel: hv_vmbus: Vmbus version:5.2 Feb 12 19:40:47.185150 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 12 19:40:47.206152 kernel: hv_vmbus: registering driver hv_storvsc Feb 12 19:40:47.213163 kernel: hv_vmbus: registering driver hv_netvsc Feb 12 19:40:47.213210 kernel: scsi host1: storvsc_host_t Feb 12 19:40:47.218369 kernel: scsi host0: storvsc_host_t Feb 12 19:40:47.233263 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 12 19:40:47.233360 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 12 19:40:47.239352 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 12 19:40:47.239413 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 12 19:40:47.254151 kernel: hv_vmbus: registering driver hid_hyperv Feb 12 19:40:47.261144 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 12 19:40:47.269154 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 12 19:40:47.287045 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 12 19:40:47.287347 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:40:47.294148 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 12 19:40:47.294347 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 12 19:40:47.294472 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 12 19:40:47.299906 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 12 19:40:47.300065 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 12 19:40:47.306158 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 12 19:40:47.314145 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:40:47.318142 kernel: sd 0:0:0:0: [sda] Attached SCSI disk 
Feb 12 19:40:47.393809 kernel: hv_netvsc 000d3a66-c56d-000d-3a66-c56d000d3a66 eth0: VF slot 1 added Feb 12 19:40:47.403142 kernel: hv_vmbus: registering driver hv_pci Feb 12 19:40:47.410144 kernel: hv_pci eff4d468-87da-40ee-aa9f-2e6db66f5c76: PCI VMBus probing: Using version 0x10004 Feb 12 19:40:47.427559 kernel: hv_pci eff4d468-87da-40ee-aa9f-2e6db66f5c76: PCI host bridge to bus 87da:00 Feb 12 19:40:47.427818 kernel: pci_bus 87da:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 12 19:40:47.427963 kernel: pci_bus 87da:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:40:47.438426 kernel: pci 87da:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 12 19:40:47.449214 kernel: pci 87da:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:40:47.466217 kernel: pci 87da:00:02.0: enabling Extended Tags Feb 12 19:40:47.482167 kernel: pci 87da:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 87da:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 12 19:40:47.491213 kernel: pci_bus 87da:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:40:47.491400 kernel: pci 87da:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:40:47.585149 kernel: mlx5_core 87da:00:02.0: firmware version: 14.30.1224 Feb 12 19:40:47.728097 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:40:47.755148 kernel: mlx5_core 87da:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 12 19:40:47.755371 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (442) Feb 12 19:40:47.770811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 12 19:40:47.908437 kernel: mlx5_core 87da:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 12 19:40:47.908638 kernel: mlx5_core 87da:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Feb 12 19:40:47.924776 kernel: hv_netvsc 000d3a66-c56d-000d-3a66-c56d000d3a66 eth0: VF registering: eth1 Feb 12 19:40:47.924940 kernel: mlx5_core 87da:00:02.0 eth1: joined to eth0 Feb 12 19:40:47.926033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:40:47.927185 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:40:47.928476 systemd[1]: Starting disk-uuid.service... Feb 12 19:40:47.957145 kernel: mlx5_core 87da:00:02.0 enP34778s1: renamed from eth1 Feb 12 19:40:47.997226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:40:48.949890 disk-uuid[561]: The operation has completed successfully. Feb 12 19:40:48.952520 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:40:49.014569 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:40:49.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.014669 systemd[1]: Finished disk-uuid.service. Feb 12 19:40:49.029866 systemd[1]: Starting verity-setup.service... Feb 12 19:40:49.065182 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:40:49.268573 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:40:49.273303 systemd[1]: Finished verity-setup.service. 
Feb 12 19:40:49.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.278356 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:40:49.352174 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:40:49.352284 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:40:49.356140 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:40:49.361655 systemd[1]: Starting ignition-setup.service... Feb 12 19:40:49.366269 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:40:49.393587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:49.393629 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:40:49.393646 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:40:49.433816 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:40:49.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.438000 audit: BPF prog-id=9 op=LOAD Feb 12 19:40:49.439387 systemd[1]: Starting systemd-networkd.service... Feb 12 19:40:49.465558 systemd-networkd[832]: lo: Link UP Feb 12 19:40:49.466992 systemd-networkd[832]: lo: Gained carrier Feb 12 19:40:49.469039 systemd-networkd[832]: Enumeration completed Feb 12 19:40:49.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.469119 systemd[1]: Started systemd-networkd.service. Feb 12 19:40:49.471662 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 19:40:49.472063 systemd[1]: Reached target network.target. Feb 12 19:40:49.475910 systemd[1]: Starting iscsiuio.service... Feb 12 19:40:49.487954 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:40:49.490422 systemd[1]: Started iscsiuio.service. Feb 12 19:40:49.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.494899 systemd[1]: Starting iscsid.service... Feb 12 19:40:49.501264 iscsid[841]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:49.501264 iscsid[841]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:40:49.501264 iscsid[841]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:40:49.501264 iscsid[841]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:40:49.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.529064 iscsid[841]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:49.529064 iscsid[841]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:40:49.521209 systemd[1]: Started iscsid.service. Feb 12 19:40:49.526934 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:40:49.544851 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:40:49.551214 kernel: mlx5_core 87da:00:02.0 enP34778s1: Link up Feb 12 19:40:49.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.548971 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:40:49.555452 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:40:49.560121 systemd[1]: Reached target remote-fs.target. Feb 12 19:40:49.564878 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:40:49.572697 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:40:49.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.618151 kernel: hv_netvsc 000d3a66-c56d-000d-3a66-c56d000d3a66 eth0: Data path switched to VF: enP34778s1 Feb 12 19:40:49.624254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:40:49.623775 systemd-networkd[832]: enP34778s1: Link UP Feb 12 19:40:49.623906 systemd-networkd[832]: eth0: Link UP Feb 12 19:40:49.624098 systemd-networkd[832]: eth0: Gained carrier Feb 12 19:40:49.634289 systemd-networkd[832]: enP34778s1: Gained carrier Feb 12 19:40:49.662592 systemd[1]: Finished ignition-setup.service. Feb 12 19:40:49.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.666113 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 12 19:40:49.673046 systemd-networkd[832]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:40:50.731372 systemd-networkd[832]: eth0: Gained IPv6LL Feb 12 19:40:52.699644 ignition[856]: Ignition 2.14.0 Feb 12 19:40:52.699663 ignition[856]: Stage: fetch-offline Feb 12 19:40:52.699773 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:52.699827 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:52.818214 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:52.818389 ignition[856]: parsed url from cmdline: "" Feb 12 19:40:52.820889 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:40:52.831296 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 12 19:40:52.831340 kernel: audit: type=1130 audit(1707766852.826:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.818393 ignition[856]: no config URL provided Feb 12 19:40:52.827361 systemd[1]: Starting ignition-fetch.service... 
Feb 12 19:40:52.818399 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:52.818407 ignition[856]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:52.818414 ignition[856]: failed to fetch config: resource requires networking Feb 12 19:40:52.819742 ignition[856]: Ignition finished successfully Feb 12 19:40:52.835664 ignition[862]: Ignition 2.14.0 Feb 12 19:40:52.835670 ignition[862]: Stage: fetch Feb 12 19:40:52.835772 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:52.835795 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:52.838902 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:52.839830 ignition[862]: parsed url from cmdline: "" Feb 12 19:40:52.839836 ignition[862]: no config URL provided Feb 12 19:40:52.839843 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:52.839858 ignition[862]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:52.839890 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:40:52.872336 ignition[862]: GET result: OK Feb 12 19:40:52.880455 ignition[862]: config has been read from IMDS userdata Feb 12 19:40:52.880520 ignition[862]: parsing config with SHA512: 5d5ca9587045df8b68738c4566f3428288cf6dc5d05c051b97964d4b2d93178a50a61d31dca6698e9684db55f862536b85f9bf26aca95ad94de3b0839ec4c030 Feb 12 19:40:52.914622 unknown[862]: fetched base config from "system" Feb 12 19:40:52.916072 unknown[862]: fetched base config from "system" Feb 12 19:40:52.916082 unknown[862]: fetched user config from "azure" Feb 12 19:40:52.923303 ignition[862]: fetch: fetch complete Feb 12 19:40:52.923314 ignition[862]: fetch: fetch passed Feb 12 19:40:52.923380 ignition[862]: Ignition finished successfully
Feb 12 19:40:52.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.927423 systemd[1]: Finished ignition-fetch.service. Feb 12 19:40:52.945584 kernel: audit: type=1130 audit(1707766852.930:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.932163 systemd[1]: Starting ignition-kargs.service... Feb 12 19:40:52.952041 ignition[868]: Ignition 2.14.0 Feb 12 19:40:52.952051 ignition[868]: Stage: kargs Feb 12 19:40:52.952216 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:52.952251 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:52.956078 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:52.959002 ignition[868]: kargs: kargs passed Feb 12 19:40:52.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.962432 systemd[1]: Finished ignition-kargs.service. Feb 12 19:40:52.980120 kernel: audit: type=1130 audit(1707766852.964:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.960213 ignition[868]: Ignition finished successfully Feb 12 19:40:52.965408 systemd[1]: Starting ignition-disks.service...
Feb 12 19:40:52.989981 ignition[874]: Ignition 2.14.0 Feb 12 19:40:52.989991 ignition[874]: Stage: disks Feb 12 19:40:52.990143 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:52.990179 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:52.995005 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:52.996418 ignition[874]: disks: disks passed Feb 12 19:40:53.018699 kernel: audit: type=1130 audit(1707766853.001:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:52.997251 systemd[1]: Finished ignition-disks.service. Feb 12 19:40:52.996459 ignition[874]: Ignition finished successfully Feb 12 19:40:53.002247 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:40:53.018690 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:40:53.022736 systemd[1]: Reached target local-fs.target. Feb 12 19:40:53.026846 systemd[1]: Reached target sysinit.target. Feb 12 19:40:53.028835 systemd[1]: Reached target basic.target. Feb 12 19:40:53.038069 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:40:53.092973 systemd-fsck[882]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 12 19:40:53.104267 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:40:53.123055 kernel: audit: type=1130 audit(1707766853.106:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.107807 systemd[1]: Mounting sysroot.mount... Feb 12 19:40:53.247140 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:40:53.247538 systemd[1]: Mounted sysroot.mount. Feb 12 19:40:53.249560 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:40:53.295343 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:40:53.304040 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:40:53.309291 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:40:53.309335 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:40:53.319219 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:40:53.353894 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:40:53.357799 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:40:53.374476 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (893) Feb 12 19:40:53.383588 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:53.383626 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:40:53.383639 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:40:53.392265 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:40:53.416371 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:40:53.450219 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:40:53.457997 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:40:53.464680 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:40:53.917437 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:40:53.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.920836 systemd[1]: Starting ignition-mount.service... Feb 12 19:40:53.941263 kernel: audit: type=1130 audit(1707766853.919:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.941983 systemd[1]: Starting sysroot-boot.service... Feb 12 19:40:53.947066 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:40:53.947224 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 19:40:53.966579 ignition[959]: INFO : Ignition 2.14.0 Feb 12 19:40:53.966579 ignition[959]: INFO : Stage: mount Feb 12 19:40:53.970955 ignition[959]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:53.970955 ignition[959]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:53.994813 kernel: audit: type=1130 audit(1707766853.976:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:53.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.994903 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:53.994903 ignition[959]: INFO : mount: mount passed Feb 12 19:40:53.994903 ignition[959]: INFO : Ignition finished successfully Feb 12 19:40:53.974220 systemd[1]: Finished ignition-mount.service. Feb 12 19:40:54.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.003165 systemd[1]: Finished sysroot-boot.service. Feb 12 19:40:54.021073 kernel: audit: type=1130 audit(1707766854.006:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.654149 coreos-metadata[892]: Feb 12 19:40:54.653 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:40:54.671952 coreos-metadata[892]: Feb 12 19:40:54.671 INFO Fetch successful Feb 12 19:40:54.707152 coreos-metadata[892]: Feb 12 19:40:54.707 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:40:54.719556 coreos-metadata[892]: Feb 12 19:40:54.719 INFO Fetch successful Feb 12 19:40:54.738926 coreos-metadata[892]: Feb 12 19:40:54.738 INFO wrote hostname ci-3510.3.2-a-c8dbf10a06 to /sysroot/etc/hostname Feb 12 19:40:54.744439 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:40:54.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:54.754606 systemd[1]: Starting ignition-files.service... Feb 12 19:40:54.768044 kernel: audit: type=1130 audit(1707766854.753:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.773818 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:40:54.790925 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (971) Feb 12 19:40:54.790965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:54.790979 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:40:54.798559 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:40:54.802805 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:40:54.817091 ignition[990]: INFO : Ignition 2.14.0 Feb 12 19:40:54.817091 ignition[990]: INFO : Stage: files Feb 12 19:40:54.820971 ignition[990]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:54.820971 ignition[990]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:40:54.835908 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:40:54.848981 ignition[990]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:40:54.871163 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:40:54.871163 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:40:54.914709 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:40:54.919515 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:40:54.960648 unknown[990]: wrote ssh authorized keys file for user: core
Feb 12 19:40:54.964150 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:40:54.968286 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:40:54.977100 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:40:55.594679 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:40:55.748855 ignition[990]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 19:40:55.757326 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:40:55.757326 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:40:55.757326 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 19:40:56.261745 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:40:56.371669 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:40:56.378098 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:40:56.378098 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:40:56.378098 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:40:56.378098 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 19:40:56.889075 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:40:57.018965 ignition[990]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 19:40:57.027892 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:40:57.027892 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:40:57.037539 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:40:57.866861 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:41:52.613515 ignition[990]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 19:41:52.624243 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:41:52.624243 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:41:52.624243 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET
https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:41:52.739773 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:41:52.959420 ignition[990]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 19:41:52.968767 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:41:52.968767 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:41:52.968767 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 12 19:41:53.086073 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 12 19:41:53.406493 ignition[990]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file 
"/sysroot/home/core/install.sh" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:41:53.420657 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3143413429" Feb 12 19:41:53.500269 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (995) Feb 12 19:41:53.429205 systemd[1]: mnt-oem3143413429.mount: Deactivated successfully. 
Feb 12 19:41:53.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.514900 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3143413429": device or resource busy Feb 12 19:41:53.514900 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3143413429", trying btrfs: device or resource busy Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3143413429" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3143413429" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3143413429" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3143413429" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603401240" Feb 12 19:41:53.514900 ignition[990]: CRITICAL : files: 
createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603401240": device or resource busy Feb 12 19:41:53.514900 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1603401240", trying btrfs: device or resource busy Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603401240" Feb 12 19:41:53.514900 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603401240" Feb 12 19:41:53.597131 kernel: audit: type=1130 audit(1707766913.500:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.439761 systemd[1]: mnt-oem1603401240.mount: Deactivated successfully. 
Feb 12 19:41:53.601791 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem1603401240" Feb 12 19:41:53.601791 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem1603401240" Feb 12 19:41:53.601791 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1c): [finished] processing unit 
"prepare-helm.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1e): [started] processing unit "containerd.service" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1e): op(1f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1e): op(1f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:41:53.601791 ignition[990]: INFO : files: op(1e): [finished] processing unit "containerd.service" Feb 12 19:41:53.749775 kernel: audit: type=1130 audit(1707766913.601:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.749811 kernel: audit: type=1131 audit(1707766913.601:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.749829 kernel: audit: type=1130 audit(1707766913.639:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.749849 kernel: audit: type=1130 audit(1707766913.704:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.749872 kernel: audit: type=1131 audit(1707766913.704:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:53.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.442234 systemd[1]: Finished ignition-files.service. 
Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(20): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(20): op(21): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(20): op(21): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(20): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(25): [started] setting preset to enabled for "waagent.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(25): [finished] setting preset to enabled for "waagent.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(26): [started] setting preset to enabled for "nvidia.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: op(26): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:41:53.751179 ignition[990]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 
19:41:53.751179 ignition[990]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:41:53.751179 ignition[990]: INFO : files: files passed Feb 12 19:41:53.751179 ignition[990]: INFO : Ignition finished successfully Feb 12 19:41:53.535504 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:41:53.826828 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:41:53.545664 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:41:53.546392 systemd[1]: Starting ignition-quench.service... Feb 12 19:41:53.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.598877 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:41:53.856480 kernel: audit: type=1130 audit(1707766913.840:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.598963 systemd[1]: Finished ignition-quench.service. Feb 12 19:41:53.632081 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:41:53.639299 systemd[1]: Reached target ignition-complete.target. Feb 12 19:41:53.682625 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:41:53.702022 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:41:53.702118 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:41:53.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:53.704523 systemd[1]: Reached target initrd-fs.target. Feb 12 19:41:53.899914 kernel: audit: type=1130 audit(1707766913.872:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.899938 kernel: audit: type=1131 audit(1707766913.872:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.732257 systemd[1]: Reached target initrd.target. Feb 12 19:41:53.740367 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:41:53.741064 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:41:53.837093 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:41:53.859231 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:41:53.869745 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:41:53.869822 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:41:53.885573 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:41:53.919965 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:41:53.924028 systemd[1]: Stopped target timers.target. Feb 12 19:41:53.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.928435 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 12 19:41:53.949578 kernel: audit: type=1131 audit(1707766913.932:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.928500 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:41:53.932591 systemd[1]: Stopped target initrd.target. Feb 12 19:41:53.949563 systemd[1]: Stopped target basic.target. Feb 12 19:41:53.951492 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:41:53.955871 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:41:53.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.958075 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:41:53.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.962155 systemd[1]: Stopped target remote-fs.target. Feb 12 19:41:53.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.964191 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:41:53.966383 systemd[1]: Stopped target sysinit.target. 
Feb 12 19:41:54.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.020420 iscsid[841]: iscsid shutting down. Feb 12 19:41:54.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.970272 systemd[1]: Stopped target local-fs.target. Feb 12 19:41:54.025760 ignition[1028]: INFO : Ignition 2.14.0 Feb 12 19:41:54.025760 ignition[1028]: INFO : Stage: umount Feb 12 19:41:54.025760 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:41:54.025760 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:41:54.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:53.972291 systemd[1]: Stopped target local-fs-pre.target. 
Feb 12 19:41:54.046753 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:41:54.046753 ignition[1028]: INFO : umount: umount passed Feb 12 19:41:54.046753 ignition[1028]: INFO : Ignition finished successfully Feb 12 19:41:53.976644 systemd[1]: Stopped target swap.target. Feb 12 19:41:53.978513 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:41:53.978570 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:41:53.982835 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:41:53.984999 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:41:53.985046 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:41:53.987362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:41:53.987405 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:41:53.992232 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:41:53.992281 systemd[1]: Stopped ignition-files.service. Feb 12 19:41:53.994436 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:41:53.994476 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:41:53.999619 systemd[1]: Stopping ignition-mount.service... Feb 12 19:41:54.003290 systemd[1]: Stopping iscsid.service... Feb 12 19:41:54.005008 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:41:54.005088 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:41:54.008320 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:41:54.010348 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:41:54.010420 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:41:54.013032 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:41:54.013098 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:41:54.018032 systemd[1]: iscsid.service: Deactivated successfully. 
Feb 12 19:41:54.018168 systemd[1]: Stopped iscsid.service. Feb 12 19:41:54.025479 systemd[1]: Stopping iscsiuio.service... Feb 12 19:41:54.029920 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:41:54.030025 systemd[1]: Stopped iscsiuio.service. Feb 12 19:41:54.044775 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:41:54.046758 systemd[1]: Stopped ignition-mount.service. Feb 12 19:41:54.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.116080 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:41:54.118796 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:41:54.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.118854 systemd[1]: Stopped ignition-disks.service. Feb 12 19:41:54.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.123306 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:41:54.123355 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:41:54.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.127713 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Feb 12 19:41:54.127760 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:41:54.132364 systemd[1]: Stopped target network.target. Feb 12 19:41:54.134709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:41:54.134764 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:41:54.139065 systemd[1]: Stopped target paths.target. Feb 12 19:41:54.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.141244 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:41:54.147162 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:41:54.149570 systemd[1]: Stopped target slices.target. Feb 12 19:41:54.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.151480 systemd[1]: Stopped target sockets.target. Feb 12 19:41:54.155387 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:41:54.155432 systemd[1]: Closed iscsid.socket. Feb 12 19:41:54.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.159976 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:41:54.160014 systemd[1]: Closed iscsiuio.socket. Feb 12 19:41:54.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.163933 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 12 19:41:54.199000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:41:54.163980 systemd[1]: Stopped ignition-setup.service. Feb 12 19:41:54.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.168224 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:41:54.172902 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:41:54.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.178059 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:41:54.178177 systemd-networkd[832]: eth0: DHCPv6 lease lost Feb 12 19:41:54.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.219000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:41:54.179592 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:41:54.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.182274 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:41:54.182360 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:41:54.190493 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:41:54.190585 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:41:54.196798 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:41:54.196839 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:41:54.202237 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 12 19:41:54.202277 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:41:54.207108 systemd[1]: Stopping network-cleanup.service... Feb 12 19:41:54.210510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:41:54.210566 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:41:54.215069 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:41:54.215120 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:41:54.219678 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:41:54.221617 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:41:54.228549 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:41:54.263257 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:41:54.265755 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:41:54.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.271038 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:41:54.271094 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:41:54.276165 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:41:54.278280 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:41:54.283541 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:41:54.285639 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:41:54.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.292231 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:41:54.292276 systemd[1]: Stopped dracut-cmdline.service. 
Feb 12 19:41:54.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.296872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:41:54.296916 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:41:54.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.301839 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:41:54.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.311657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:41:54.311704 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:41:54.316548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:41:54.316624 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:41:54.351142 kernel: hv_netvsc 000d3a66-c56d-000d-3a66-c56d000d3a66 eth0: Data path switched from VF: enP34778s1 Feb 12 19:41:54.373335 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:41:54.376008 systemd[1]: Stopped network-cleanup.service. 
Feb 12 19:41:54.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:54.380584 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:41:54.386224 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:41:54.413548 systemd[1]: Switching root. Feb 12 19:41:54.418000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:41:54.418000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:41:54.418000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:41:54.418000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:41:54.418000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:41:54.439457 systemd-journald[183]: Journal stopped Feb 12 19:42:14.284291 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 12 19:42:14.284326 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:42:14.284337 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:42:14.284349 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:42:14.284357 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:42:14.284365 kernel: SELinux: policy capability open_perms=1 Feb 12 19:42:14.284376 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:42:14.284387 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:42:14.284395 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:42:14.284404 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:42:14.284414 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:42:14.284426 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:42:14.284435 systemd[1]: Successfully loaded SELinux policy in 278.850ms. Feb 12 19:42:14.284448 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.063ms. 
Feb 12 19:42:14.284464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:42:14.284474 systemd[1]: Detected virtualization microsoft. Feb 12 19:42:14.284486 systemd[1]: Detected architecture x86-64. Feb 12 19:42:14.284495 systemd[1]: Detected first boot. Feb 12 19:42:14.284506 systemd[1]: Hostname set to . Feb 12 19:42:14.284520 systemd[1]: Initializing machine ID from random generator. Feb 12 19:42:14.284529 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 12 19:42:14.284541 kernel: audit: type=1400 audit(1707766918.725:88): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:42:14.284553 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 12 19:42:14.284563 kernel: audit: type=1400 audit(1707766920.914:89): avc: denied { associate } for pid=1078 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:42:14.284575 kernel: audit: type=1300 audit(1707766920.914:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001076ac a1=c00002cb58 a2=c00002b440 a3=32 items=0 ppid=1061 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:14.284587 kernel: audit: type=1327 audit(1707766920.914:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:42:14.284597 kernel: audit: type=1400 audit(1707766920.923:90): avc: denied { associate } for pid=1078 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:42:14.284609 kernel: audit: type=1300 audit(1707766920.923:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107785 a2=1ed a3=0 items=2 ppid=1061 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:14.284618 kernel: audit: type=1307 audit(1707766920.923:90): cwd="/" Feb 12 19:42:14.284629 kernel: audit: type=1302 audit(1707766920.923:90): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:14.284642 kernel: audit: type=1302 audit(1707766920.923:90): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:14.284652 kernel: audit: type=1327 audit(1707766920.923:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:42:14.284661 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:42:14.284673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:42:14.284683 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:42:14.284696 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:42:14.284705 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:42:14.284719 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:42:14.284732 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:42:14.284742 systemd[1]: Created slice system-getty.slice. Feb 12 19:42:14.284757 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:42:14.284768 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:42:14.284780 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Feb 12 19:42:14.284790 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:42:14.284802 systemd[1]: Created slice user.slice. Feb 12 19:42:14.284816 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:42:14.284826 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:42:14.284837 systemd[1]: Set up automount boot.automount. Feb 12 19:42:14.284848 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:42:14.284861 systemd[1]: Reached target integritysetup.target. Feb 12 19:42:14.284870 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:42:14.284881 systemd[1]: Reached target remote-fs.target. Feb 12 19:42:14.284892 systemd[1]: Reached target slices.target. Feb 12 19:42:14.284904 systemd[1]: Reached target swap.target. Feb 12 19:42:14.284916 systemd[1]: Reached target torcx.target. Feb 12 19:42:14.284929 systemd[1]: Reached target veritysetup.target. Feb 12 19:42:14.284939 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:42:14.284950 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:42:14.284960 kernel: audit: type=1400 audit(1707766933.970:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:42:14.284972 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:42:14.284984 kernel: audit: type=1335 audit(1707766933.970:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:42:14.284997 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:42:14.285008 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:42:14.285019 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:42:14.285031 systemd[1]: Listening on systemd-udevd-control.socket. 
Feb 12 19:42:14.285041 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:42:14.285054 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:42:14.285068 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:42:14.285079 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:42:14.285089 systemd[1]: Mounting media.mount... Feb 12 19:42:14.285101 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:42:14.285114 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:42:14.285142 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:42:14.285156 systemd[1]: Mounting tmp.mount... Feb 12 19:42:14.285166 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:42:14.285180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:42:14.285191 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:42:14.285203 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:42:14.285213 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:42:14.285226 systemd[1]: Starting modprobe@drm.service... Feb 12 19:42:14.285237 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:42:14.285248 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:42:14.285260 systemd[1]: Starting modprobe@loop.service... Feb 12 19:42:14.285272 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:42:14.285286 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:42:14.285297 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:42:14.285309 systemd[1]: Starting systemd-journald.service... Feb 12 19:42:14.285320 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:42:14.285332 systemd[1]: Starting systemd-network-generator.service... 
Feb 12 19:42:14.285341 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:42:14.285354 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:42:14.285365 kernel: fuse: init (API version 7.34) Feb 12 19:42:14.285378 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:42:14.285388 kernel: loop: module loaded Feb 12 19:42:14.285399 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:42:14.285412 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:42:14.285422 kernel: audit: type=1305 audit(1707766934.277:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:42:14.285440 systemd-journald[1187]: Journal started Feb 12 19:42:14.285493 systemd-journald[1187]: Runtime Journal (/run/log/journal/7f4dcf09e075483eae86aef6f9e98134) is 8.0M, max 159.0M, 151.0M free. Feb 12 19:42:13.970000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:42:13.970000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:42:14.277000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:42:14.295728 systemd[1]: Started systemd-journald.service. 
Feb 12 19:42:14.315970 kernel: audit: type=1300 audit(1707766934.277:93): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe7fecb200 a2=4000 a3=7ffe7fecb29c items=0 ppid=1 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:14.277000 audit[1187]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe7fecb200 a2=4000 a3=7ffe7fecb29c items=0 ppid=1 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:14.317887 systemd[1]: Mounted media.mount. Feb 12 19:42:14.277000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:42:14.325465 kernel: audit: type=1327 audit(1707766934.277:93): proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:42:14.338651 kernel: audit: type=1130 audit(1707766934.316:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.339059 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:42:14.341484 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:42:14.343858 systemd[1]: Mounted tmp.mount. Feb 12 19:42:14.345880 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:42:14.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:42:14.348396 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:42:14.364196 kernel: audit: type=1130 audit(1707766934.345:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.364689 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:42:14.364846 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:42:14.378142 kernel: audit: type=1130 audit(1707766934.364:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.379916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:42:14.380075 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:42:14.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.405341 kernel: audit: type=1130 audit(1707766934.379:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:14.405380 kernel: audit: type=1131 audit(1707766934.379:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.408185 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:42:14.408379 systemd[1]: Finished modprobe@drm.service. Feb 12 19:42:14.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.410957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:42:14.411132 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:42:14.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:42:14.413802 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:42:14.413959 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:42:14.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.416379 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:42:14.416537 systemd[1]: Finished modprobe@loop.service. Feb 12 19:42:14.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.419116 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:42:14.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.423581 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:42:14.426320 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:42:14.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:14.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.429301 systemd[1]: Reached target network-pre.target. Feb 12 19:42:14.433220 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:42:14.441246 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:42:14.444298 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:42:14.448685 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:42:14.452312 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:42:14.456305 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:42:14.458732 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:42:14.461492 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:42:14.463102 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:42:14.466903 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:42:14.474280 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:42:14.476761 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:42:14.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.504372 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:42:14.508327 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:42:14.511159 systemd[1]: Finished systemd-random-seed.service. 
Feb 12 19:42:14.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.515999 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:42:14.521998 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:42:14.524096 systemd-journald[1187]: Time spent on flushing to /var/log/journal/7f4dcf09e075483eae86aef6f9e98134 is 21.744ms for 1137 entries. Feb 12 19:42:14.524096 systemd-journald[1187]: System Journal (/var/log/journal/7f4dcf09e075483eae86aef6f9e98134) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:42:14.608528 systemd-journald[1187]: Received client request to flush runtime journal. Feb 12 19:42:14.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.609621 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:42:14.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:14.616629 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:42:15.201969 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:42:15.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:15.206169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:42:15.600000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 12 19:42:15.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:15.891409 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:42:15.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:15.895779 systemd[1]: Starting systemd-udevd.service... Feb 12 19:42:15.915587 systemd-udevd[1240]: Using default interface naming scheme 'v252'. Feb 12 19:42:16.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:16.575197 systemd[1]: Started systemd-udevd.service. Feb 12 19:42:16.580286 systemd[1]: Starting systemd-networkd.service... Feb 12 19:42:16.618005 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:42:16.690300 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 19:42:16.713151 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:42:16.740558 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:42:16.740622 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:42:16.708000 audit[1244]: AVC avc: denied { confidentiality } for pid=1244 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:42:16.748141 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:42:16.764145 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:42:16.764203 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:42:16.764228 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:42:16.768191 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:42:16.770152 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:42:16.771143 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:42:16.778389 systemd[1]: Started systemd-userdbd.service. Feb 12 19:42:16.782735 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:42:16.782779 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:42:16.787452 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:42:16.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:16.872829 systemd-journald[1187]: Time jumped backwards, rotating. 
Feb 12 19:42:16.708000 audit[1244]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=565469d47eb0 a1=f884 a2=7f2b9ff74bc5 a3=5 items=12 ppid=1240 pid=1244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:16.708000 audit: CWD cwd="/" Feb 12 19:42:16.708000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=1 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=2 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=3 name=(null) inode=15727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=4 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=5 name=(null) inode=15728 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=6 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=7 name=(null) inode=15729 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=8 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=9 name=(null) inode=15730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=10 name=(null) inode=15726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PATH item=11 name=(null) inode=15731 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:16.708000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:42:17.290354 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1249) Feb 12 19:42:17.372311 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 19:42:17.406350 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 12 19:42:17.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.438882 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:42:17.443943 systemd[1]: Starting lvm2-activation-early.service... 
Feb 12 19:42:17.501804 systemd-networkd[1255]: lo: Link UP Feb 12 19:42:17.501815 systemd-networkd[1255]: lo: Gained carrier Feb 12 19:42:17.502403 systemd-networkd[1255]: Enumeration completed Feb 12 19:42:17.502563 systemd[1]: Started systemd-networkd.service. Feb 12 19:42:17.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.506875 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:42:17.518727 systemd-networkd[1255]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:42:17.573362 kernel: mlx5_core 87da:00:02.0 enP34778s1: Link up Feb 12 19:42:17.611354 kernel: hv_netvsc 000d3a66-c56d-000d-3a66-c56d000d3a66 eth0: Data path switched to VF: enP34778s1 Feb 12 19:42:17.612725 systemd-networkd[1255]: enP34778s1: Link UP Feb 12 19:42:17.613054 systemd-networkd[1255]: eth0: Link UP Feb 12 19:42:17.613153 systemd-networkd[1255]: eth0: Gained carrier Feb 12 19:42:17.618606 systemd-networkd[1255]: enP34778s1: Gained carrier Feb 12 19:42:17.654448 systemd-networkd[1255]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:42:17.858092 lvm[1320]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:42:17.888532 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:42:17.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.891266 systemd[1]: Reached target cryptsetup.target. Feb 12 19:42:17.895208 systemd[1]: Starting lvm2-activation.service... Feb 12 19:42:17.901779 lvm[1323]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 12 19:42:17.925251 systemd[1]: Finished lvm2-activation.service. Feb 12 19:42:17.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.928026 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:42:17.930548 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:42:17.930582 systemd[1]: Reached target local-fs.target. Feb 12 19:42:17.932973 systemd[1]: Reached target machines.target. Feb 12 19:42:17.936879 systemd[1]: Starting ldconfig.service... Feb 12 19:42:17.959529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:42:17.959598 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:17.960792 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:42:17.964145 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:42:17.968495 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:42:17.970957 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:42:17.971143 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:42:17.972738 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:42:17.988644 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:42:18.003678 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:42:18.319120 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 12 19:42:18.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:18.532667 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1326 (bootctl) Feb 12 19:42:18.534356 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:42:18.550343 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:42:19.168656 systemd-networkd[1255]: eth0: Gained IPv6LL Feb 12 19:42:19.191794 kernel: kauditd_printk_skb: 43 callbacks suppressed Feb 12 19:42:19.191874 kernel: audit: type=1130 audit(1707766939.173:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:19.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:19.174320 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:42:24.032222 systemd-fsck[1336]: fsck.fat 4.2 (2021-01-31) Feb 12 19:42:24.032222 systemd-fsck[1336]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 19:42:24.034761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:42:24.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.040088 systemd[1]: Mounting boot.mount... 
Feb 12 19:42:24.054363 kernel: audit: type=1130 audit(1707766944.037:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.269061 systemd[1]: Mounted boot.mount. Feb 12 19:42:24.288064 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:42:24.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.302367 kernel: audit: type=1130 audit(1707766944.289:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.619963 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:42:24.621072 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:42:24.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.636368 kernel: audit: type=1130 audit(1707766944.623:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.855739 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:42:24.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:24.859833 systemd[1]: Starting audit-rules.service... Feb 12 19:42:24.873919 kernel: audit: type=1130 audit(1707766944.857:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.875169 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:42:24.878993 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:42:24.885654 systemd[1]: Starting systemd-resolved.service... Feb 12 19:42:24.889795 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:42:24.893969 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:42:24.897060 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:42:24.900222 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:42:24.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.915053 kernel: audit: type=1130 audit(1707766944.898:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.918000 audit[1356]: SYSTEM_BOOT pid=1356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.921359 systemd[1]: Finished systemd-update-utmp.service. 
Feb 12 19:42:24.936446 kernel: audit: type=1127 audit(1707766944.918:133): pid=1356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:24.951808 kernel: audit: type=1130 audit(1707766944.936:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.192273 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:42:25.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.195117 systemd[1]: Reached target time-set.target. Feb 12 19:42:25.208355 kernel: audit: type=1130 audit(1707766945.193:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.381509 systemd-resolved[1354]: Positive Trust Anchors: Feb 12 19:42:25.381523 systemd-resolved[1354]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:42:25.381560 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:42:25.544510 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:42:25.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.562576 kernel: audit: type=1130 audit(1707766945.546:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.593525 systemd-timesyncd[1355]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Feb 12 19:42:25.593593 systemd-timesyncd[1355]: Initial clock synchronization to Mon 2024-02-12 19:42:25.593958 UTC. Feb 12 19:42:25.982693 systemd-resolved[1354]: Using system hostname 'ci-3510.3.2-a-c8dbf10a06'. Feb 12 19:42:25.984426 systemd[1]: Started systemd-resolved.service. Feb 12 19:42:25.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:25.986910 systemd[1]: Reached target network.target. 
Feb 12 19:42:26.003378 kernel: audit: type=1130 audit(1707766945.986:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.003419 systemd[1]: Reached target network-online.target. Feb 12 19:42:26.006143 systemd[1]: Reached target nss-lookup.target. Feb 12 19:42:26.127000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:42:26.130035 systemd[1]: Finished audit-rules.service. Feb 12 19:42:26.136609 augenrules[1373]: No rules Feb 12 19:42:26.127000 audit[1373]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffbc6b2340 a2=420 a3=0 items=0 ppid=1349 pid=1373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:26.127000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:42:26.141609 kernel: audit: type=1305 audit(1707766946.127:138): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:42:33.675468 ldconfig[1325]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:42:33.684517 systemd[1]: Finished ldconfig.service. Feb 12 19:42:33.689077 systemd[1]: Starting systemd-update-done.service... Feb 12 19:42:33.708958 systemd[1]: Finished systemd-update-done.service. Feb 12 19:42:33.711996 systemd[1]: Reached target sysinit.target. Feb 12 19:42:33.714262 systemd[1]: Started motdgen.path. Feb 12 19:42:33.716380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:42:33.719559 systemd[1]: Started logrotate.timer. Feb 12 19:42:33.721645 systemd[1]: Started mdadm.timer. 
Feb 12 19:42:33.723613 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:42:33.725965 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:42:33.726108 systemd[1]: Reached target paths.target. Feb 12 19:42:33.728127 systemd[1]: Reached target timers.target. Feb 12 19:42:33.730677 systemd[1]: Listening on dbus.socket. Feb 12 19:42:33.733963 systemd[1]: Starting docker.socket... Feb 12 19:42:33.750464 systemd[1]: Listening on sshd.socket. Feb 12 19:42:33.752692 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:33.753134 systemd[1]: Listening on docker.socket. Feb 12 19:42:33.755514 systemd[1]: Reached target sockets.target. Feb 12 19:42:33.757779 systemd[1]: Reached target basic.target. Feb 12 19:42:33.760039 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:42:33.760096 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:42:33.760126 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:42:33.761147 systemd[1]: Starting containerd.service... Feb 12 19:42:33.764586 systemd[1]: Starting dbus.service... Feb 12 19:42:33.768199 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:42:33.771753 systemd[1]: Starting extend-filesystems.service... Feb 12 19:42:33.773856 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:42:33.775327 systemd[1]: Starting motdgen.service... Feb 12 19:42:33.778671 systemd[1]: Started nvidia.service. Feb 12 19:42:33.782139 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:42:33.787635 systemd[1]: Starting prepare-critools.service... 
Feb 12 19:42:33.792699 systemd[1]: Starting prepare-helm.service... Feb 12 19:42:33.796215 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:42:33.800046 systemd[1]: Starting sshd-keygen.service... Feb 12 19:42:33.807440 systemd[1]: Starting systemd-logind.service... Feb 12 19:42:33.812426 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:33.812525 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:42:33.814068 systemd[1]: Starting update-engine.service... Feb 12 19:42:33.819849 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:42:33.829686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:42:33.829971 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:42:33.849418 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:42:33.849708 systemd[1]: Finished motdgen.service. Feb 12 19:42:33.877973 jq[1387]: false Feb 12 19:42:33.878652 jq[1409]: true Feb 12 19:42:33.880106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:42:33.880457 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 12 19:42:33.907030 jq[1432]: true Feb 12 19:42:33.908379 env[1420]: time="2024-02-12T19:42:33.908315313Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:42:33.944761 extend-filesystems[1388]: Found sda Feb 12 19:42:33.947683 extend-filesystems[1388]: Found sda1 Feb 12 19:42:33.947683 extend-filesystems[1388]: Found sda2 Feb 12 19:42:33.947683 extend-filesystems[1388]: Found sda3 Feb 12 19:42:33.947683 extend-filesystems[1388]: Found usr Feb 12 19:42:33.947683 extend-filesystems[1388]: Found sda4 Feb 12 19:42:33.947683 extend-filesystems[1388]: Found sda6 Feb 12 19:42:33.982807 extend-filesystems[1388]: Found sda7 Feb 12 19:42:33.982807 extend-filesystems[1388]: Found sda9 Feb 12 19:42:33.982807 extend-filesystems[1388]: Checking size of /dev/sda9 Feb 12 19:42:34.009385 tar[1413]: crictl Feb 12 19:42:33.952470 systemd-logind[1403]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:42:34.009901 tar[1412]: ./ Feb 12 19:42:34.009901 tar[1412]: ./macvlan Feb 12 19:42:34.010142 tar[1414]: linux-amd64/helm Feb 12 19:42:33.958586 systemd-logind[1403]: New seat seat0. Feb 12 19:42:34.038341 env[1420]: time="2024-02-12T19:42:34.038283933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:42:34.040400 env[1420]: time="2024-02-12T19:42:34.040375273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.043652 extend-filesystems[1388]: Old size kept for /dev/sda9 Feb 12 19:42:34.051447 extend-filesystems[1388]: Found sr0 Feb 12 19:42:34.044137 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:42:34.044388 systemd[1]: Finished extend-filesystems.service. Feb 12 19:42:34.058168 env[1420]: time="2024-02-12T19:42:34.058121113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:34.059368 env[1420]: time="2024-02-12T19:42:34.058168414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.062223 env[1420]: time="2024-02-12T19:42:34.062187391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:34.062300 env[1420]: time="2024-02-12T19:42:34.062225492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.062300 env[1420]: time="2024-02-12T19:42:34.062252293Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:42:34.062300 env[1420]: time="2024-02-12T19:42:34.062268393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.062448 env[1420]: time="2024-02-12T19:42:34.062427396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.062745 env[1420]: time="2024-02-12T19:42:34.062720602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:34.063004 env[1420]: time="2024-02-12T19:42:34.062977607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:34.063057 env[1420]: time="2024-02-12T19:42:34.063006907Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:42:34.063098 env[1420]: time="2024-02-12T19:42:34.063075908Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:42:34.063098 env[1420]: time="2024-02-12T19:42:34.063093209Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:42:34.070538 dbus-daemon[1385]: [system] SELinux support is enabled Feb 12 19:42:34.084885 dbus-daemon[1385]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:42:34.070730 systemd[1]: Started dbus.service. Feb 12 19:42:34.076387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:42:34.076416 systemd[1]: Reached target system-config.target. Feb 12 19:42:34.079360 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:42:34.079385 systemd[1]: Reached target user-config.target. Feb 12 19:42:34.083429 systemd[1]: Started systemd-logind.service. Feb 12 19:42:34.091950 bash[1454]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:42:34.088594 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:42:34.099092 env[1420]: time="2024-02-12T19:42:34.099059399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:42:34.099172 env[1420]: time="2024-02-12T19:42:34.099129301Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 12 19:42:34.099172 env[1420]: time="2024-02-12T19:42:34.099150801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:42:34.099249 env[1420]: time="2024-02-12T19:42:34.099222602Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.099288 env[1420]: time="2024-02-12T19:42:34.099248103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099325404Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099365705Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099392006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099410206Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099442407Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099459407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099475307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099607910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.099743912Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.100352824Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.100408225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.100447126Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.100508727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.100874 env[1420]: time="2024-02-12T19:42:34.100526927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100543328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100559028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100585129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100603729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100618429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100634229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100660030Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100853934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100876834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100907735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100925035Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100945335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.100960536Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:42:34.101407 env[1420]: time="2024-02-12T19:42:34.101011537Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:42:34.101877 env[1420]: time="2024-02-12T19:42:34.101067738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:42:34.101917 env[1420]: time="2024-02-12T19:42:34.101392144Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:42:34.101917 env[1420]: time="2024-02-12T19:42:34.101479946Z" level=info msg="Connect containerd service"
Feb 12 19:42:34.101917 env[1420]: time="2024-02-12T19:42:34.101532647Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.105328420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106444341Z" level=info msg="Start subscribing containerd event"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106596244Z" level=info msg="Start recovering state"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106676945Z" level=info msg="Start event monitor"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106708846Z" level=info msg="Start snapshots syncer"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106722346Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.106737647Z" level=info msg="Start streaming server"
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.107416060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.107480161Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:42:34.133946 env[1420]: time="2024-02-12T19:42:34.111542139Z" level=info msg="containerd successfully booted in 0.205490s"
Feb 12 19:42:34.107609 systemd[1]: Started containerd.service.
Feb 12 19:42:34.114218 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 19:42:34.157312 tar[1412]: ./static
Feb 12 19:42:34.252498 tar[1412]: ./vlan
Feb 12 19:42:34.362876 tar[1412]: ./portmap
Feb 12 19:42:34.443656 tar[1412]: ./host-local
Feb 12 19:42:34.513215 tar[1412]: ./vrf
Feb 12 19:42:34.594358 tar[1412]: ./bridge
Feb 12 19:42:34.678870 tar[1412]: ./tuning
Feb 12 19:42:34.734552 update_engine[1408]: I0212 19:42:34.719036 1408 main.cc:92] Flatcar Update Engine starting
Feb 12 19:42:34.754648 tar[1412]: ./firewall
Feb 12 19:42:34.781690 systemd[1]: Started update-engine.service.
Feb 12 19:42:34.789177 systemd[1]: Started locksmithd.service.
Feb 12 19:42:34.793051 update_engine[1408]: I0212 19:42:34.792665 1408 update_check_scheduler.cc:74] Next update check in 7m42s
Feb 12 19:42:34.847622 tar[1412]: ./host-device
Feb 12 19:42:34.943704 tar[1412]: ./sbr
Feb 12 19:42:35.010470 tar[1412]: ./loopback
Feb 12 19:42:35.078683 tar[1412]: ./dhcp
Feb 12 19:42:35.087115 tar[1414]: linux-amd64/LICENSE
Feb 12 19:42:35.087507 tar[1414]: linux-amd64/README.md
Feb 12 19:42:35.101705 systemd[1]: Finished prepare-helm.service.
Feb 12 19:42:35.141396 systemd[1]: Finished prepare-critools.service.
Feb 12 19:42:35.228492 tar[1412]: ./ptp
Feb 12 19:42:35.272664 tar[1412]: ./ipvlan
Feb 12 19:42:35.314067 tar[1412]: ./bandwidth
Feb 12 19:42:35.414016 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:42:35.515282 sshd_keygen[1410]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:42:35.535064 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:42:35.539450 systemd[1]: Starting issuegen.service...
Feb 12 19:42:35.544603 systemd[1]: Started waagent.service.
Feb 12 19:42:35.551841 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:42:35.552114 systemd[1]: Finished issuegen.service.
Feb 12 19:42:35.556121 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:42:35.582819 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:42:35.587437 systemd[1]: Started getty@tty1.service.
Feb 12 19:42:35.591216 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 19:42:35.593999 systemd[1]: Reached target getty.target.
Feb 12 19:42:35.596201 systemd[1]: Reached target multi-user.target.
Feb 12 19:42:35.599822 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:42:35.610216 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:42:35.610479 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:42:35.613585 systemd[1]: Startup finished in 865ms (firmware) + 27.363s (loader) + 1min 12.674s (kernel) + 38.172s (userspace) = 2min 19.075s.
Feb 12 19:42:36.064907 login[1536]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 12 19:42:36.065438 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:42:36.119778 systemd[1]: Created slice user-500.slice.
Feb 12 19:42:36.122245 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:42:36.126014 systemd-logind[1403]: New session 1 of user core.
Feb 12 19:42:36.133398 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:42:36.134917 systemd[1]: Starting user@500.service...
Feb 12 19:42:36.154835 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:42:36.343234 systemd[1546]: Queued start job for default target default.target.
Feb 12 19:42:36.343544 systemd[1546]: Reached target paths.target.
Feb 12 19:42:36.343569 systemd[1546]: Reached target sockets.target.
Feb 12 19:42:36.343589 systemd[1546]: Reached target timers.target.
Feb 12 19:42:36.343607 systemd[1546]: Reached target basic.target.
Feb 12 19:42:36.343760 systemd[1]: Started user@500.service.
Feb 12 19:42:36.345031 systemd[1]: Started session-1.scope.
Feb 12 19:42:36.345318 systemd[1546]: Reached target default.target.
Feb 12 19:42:36.345568 systemd[1546]: Startup finished in 185ms.
Feb 12 19:42:36.434854 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:42:37.067457 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:42:37.073481 systemd[1]: Started session-2.scope.
Feb 12 19:42:37.074415 systemd-logind[1403]: New session 2 of user core.
Feb 12 19:42:41.307603 waagent[1528]: 2024-02-12T19:42:41.307488Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 12 19:42:41.312202 waagent[1528]: 2024-02-12T19:42:41.312129Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 12 19:42:41.315275 waagent[1528]: 2024-02-12T19:42:41.315213Z INFO Daemon Daemon Python: 3.9.16
Feb 12 19:42:41.318367 waagent[1528]: 2024-02-12T19:42:41.318285Z INFO Daemon Daemon Run daemon
Feb 12 19:42:41.321052 waagent[1528]: 2024-02-12T19:42:41.320993Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 12 19:42:41.333931 waagent[1528]: 2024-02-12T19:42:41.333820Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 12 19:42:41.341403 waagent[1528]: 2024-02-12T19:42:41.341287Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.341686Z INFO Daemon Daemon cloud-init is enabled: False
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.342686Z INFO Daemon Daemon Using waagent for provisioning
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.344360Z INFO Daemon Daemon Activate resource disk
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.345291Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.352866Z INFO Daemon Daemon Found device: None
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.353547Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.354461Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.356364Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 12 19:42:41.374648 waagent[1528]: 2024-02-12T19:42:41.357499Z INFO Daemon Daemon Running default provisioning handler
Feb 12 19:42:41.377031 waagent[1528]: 2024-02-12T19:42:41.376913Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 12 19:42:41.392832 waagent[1528]: 2024-02-12T19:42:41.379236Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 12 19:42:41.392832 waagent[1528]: 2024-02-12T19:42:41.380600Z INFO Daemon Daemon cloud-init is enabled: False
Feb 12 19:42:41.392832 waagent[1528]: 2024-02-12T19:42:41.381515Z INFO Daemon Daemon Copying ovf-env.xml
Feb 12 19:42:41.455652 waagent[1528]: 2024-02-12T19:42:41.455492Z INFO Daemon Daemon Successfully mounted dvd
Feb 12 19:42:41.537744 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 12 19:42:41.557817 waagent[1528]: 2024-02-12T19:42:41.557628Z INFO Daemon Daemon Detect protocol endpoint
Feb 12 19:42:41.561463 waagent[1528]: 2024-02-12T19:42:41.561381Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 12 19:42:41.565132 waagent[1528]: 2024-02-12T19:42:41.565065Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 12 19:42:41.568707 waagent[1528]: 2024-02-12T19:42:41.568644Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 12 19:42:41.572824 waagent[1528]: 2024-02-12T19:42:41.572760Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 12 19:42:41.575658 waagent[1528]: 2024-02-12T19:42:41.575596Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 12 19:42:41.686147 waagent[1528]: 2024-02-12T19:42:41.686082Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 12 19:42:41.694587 waagent[1528]: 2024-02-12T19:42:41.687020Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 12 19:42:41.694587 waagent[1528]: 2024-02-12T19:42:41.688096Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 12 19:42:42.115924 waagent[1528]: 2024-02-12T19:42:42.115780Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 12 19:42:42.126570 waagent[1528]: 2024-02-12T19:42:42.126495Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 12 19:42:42.132095 waagent[1528]: 2024-02-12T19:42:42.126883Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 12 19:42:42.202810 waagent[1528]: 2024-02-12T19:42:42.202695Z INFO Daemon Daemon Found private key matching thumbprint 7B15B499A7D51388C87A7842FC5BB0BA49F41717
Feb 12 19:42:42.209020 waagent[1528]: 2024-02-12T19:42:42.208953Z INFO Daemon Daemon Certificate with thumbprint 6DA4298D41BFED0C2C14873E930D619312F7DAB3 has no matching private key.
Feb 12 19:42:42.214322 waagent[1528]: 2024-02-12T19:42:42.214262Z INFO Daemon Daemon Fetch goal state completed
Feb 12 19:42:42.260001 waagent[1528]: 2024-02-12T19:42:42.259926Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 72b61bef-a21b-4c60-a93e-4722190c5773 New eTag: 549548178217216908]
Feb 12 19:42:42.265919 waagent[1528]: 2024-02-12T19:42:42.265853Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 12 19:42:42.278367 waagent[1528]: 2024-02-12T19:42:42.278297Z INFO Daemon Daemon Starting provisioning
Feb 12 19:42:42.281237 waagent[1528]: 2024-02-12T19:42:42.281179Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 12 19:42:42.283918 waagent[1528]: 2024-02-12T19:42:42.283863Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-c8dbf10a06]
Feb 12 19:42:42.289872 waagent[1528]: 2024-02-12T19:42:42.289779Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-c8dbf10a06]
Feb 12 19:42:42.293603 waagent[1528]: 2024-02-12T19:42:42.293542Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 12 19:42:42.297315 waagent[1528]: 2024-02-12T19:42:42.297254Z INFO Daemon Daemon Primary interface is [eth0]
Feb 12 19:42:42.311198 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 12 19:42:42.311536 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 12 19:42:42.311605 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 12 19:42:42.311914 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:42:42.316390 systemd-networkd[1255]: eth0: DHCPv6 lease lost
Feb 12 19:42:42.317787 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:42:42.318119 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:42:42.321220 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:42:42.356457 systemd-networkd[1592]: enP34778s1: Link UP
Feb 12 19:42:42.356467 systemd-networkd[1592]: enP34778s1: Gained carrier
Feb 12 19:42:42.357907 systemd-networkd[1592]: eth0: Link UP
Feb 12 19:42:42.357917 systemd-networkd[1592]: eth0: Gained carrier
Feb 12 19:42:42.358367 systemd-networkd[1592]: lo: Link UP
Feb 12 19:42:42.358374 systemd-networkd[1592]: lo: Gained carrier
Feb 12 19:42:42.358676 systemd-networkd[1592]: eth0: Gained IPv6LL
Feb 12 19:42:42.358941 systemd-networkd[1592]: Enumeration completed
Feb 12 19:42:42.359057 systemd[1]: Started systemd-networkd.service.
Feb 12 19:42:42.361387 waagent[1528]: 2024-02-12T19:42:42.360231Z INFO Daemon Daemon Create user account if not exists
Feb 12 19:42:42.362314 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:42:42.367706 systemd-networkd[1592]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:42:42.368841 waagent[1528]: 2024-02-12T19:42:42.368760Z INFO Daemon Daemon User core already exists, skip useradd
Feb 12 19:42:42.372105 waagent[1528]: 2024-02-12T19:42:42.372036Z INFO Daemon Daemon Configure sudoer
Feb 12 19:42:42.374993 waagent[1528]: 2024-02-12T19:42:42.374933Z INFO Daemon Daemon Configure sshd
Feb 12 19:42:42.378105 waagent[1528]: 2024-02-12T19:42:42.377674Z INFO Daemon Daemon Deploy ssh public key.
Feb 12 19:42:42.406411 systemd-networkd[1592]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 12 19:42:42.408895 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:42:43.617037 waagent[1528]: 2024-02-12T19:42:43.616941Z INFO Daemon Daemon Provisioning complete
Feb 12 19:42:43.636223 waagent[1528]: 2024-02-12T19:42:43.636146Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 12 19:42:43.639729 waagent[1528]: 2024-02-12T19:42:43.639662Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 12 19:42:43.645607 waagent[1528]: 2024-02-12T19:42:43.645542Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 12 19:42:43.909030 waagent[1602]: 2024-02-12T19:42:43.908871Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 12 19:42:43.909755 waagent[1602]: 2024-02-12T19:42:43.909693Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:43.909896 waagent[1602]: 2024-02-12T19:42:43.909844Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:43.920695 waagent[1602]: 2024-02-12T19:42:43.920624Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 12 19:42:43.920850 waagent[1602]: 2024-02-12T19:42:43.920801Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 12 19:42:43.980265 waagent[1602]: 2024-02-12T19:42:43.980150Z INFO ExtHandler ExtHandler Found private key matching thumbprint 7B15B499A7D51388C87A7842FC5BB0BA49F41717
Feb 12 19:42:43.980489 waagent[1602]: 2024-02-12T19:42:43.980430Z INFO ExtHandler ExtHandler Certificate with thumbprint 6DA4298D41BFED0C2C14873E930D619312F7DAB3 has no matching private key.
Feb 12 19:42:43.980712 waagent[1602]: 2024-02-12T19:42:43.980662Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 12 19:42:43.996998 waagent[1602]: 2024-02-12T19:42:43.996938Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 11f07549-ca32-4d98-916d-cd44d243b905 New eTag: 549548178217216908]
Feb 12 19:42:43.997556 waagent[1602]: 2024-02-12T19:42:43.997501Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 12 19:42:44.041535 waagent[1602]: 2024-02-12T19:42:44.041408Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 12 19:42:44.051492 waagent[1602]: 2024-02-12T19:42:44.051411Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1602
Feb 12 19:42:44.060295 waagent[1602]: 2024-02-12T19:42:44.056723Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 12 19:42:44.060295 waagent[1602]: 2024-02-12T19:42:44.058281Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 12 19:42:44.112097 waagent[1602]: 2024-02-12T19:42:44.112033Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 12 19:42:44.112511 waagent[1602]: 2024-02-12T19:42:44.112449Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 12 19:42:44.120321 waagent[1602]: 2024-02-12T19:42:44.120266Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 12 19:42:44.120781 waagent[1602]: 2024-02-12T19:42:44.120725Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 12 19:42:44.121831 waagent[1602]: 2024-02-12T19:42:44.121768Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 12 19:42:44.123087 waagent[1602]: 2024-02-12T19:42:44.123029Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 12 19:42:44.123763 waagent[1602]: 2024-02-12T19:42:44.123707Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 12 19:42:44.123890 waagent[1602]: 2024-02-12T19:42:44.123813Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:44.124257 waagent[1602]: 2024-02-12T19:42:44.124207Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:44.124669 waagent[1602]: 2024-02-12T19:42:44.124609Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 12 19:42:44.125245 waagent[1602]: 2024-02-12T19:42:44.125189Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 12 19:42:44.125409 waagent[1602]: 2024-02-12T19:42:44.125312Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:44.125745 waagent[1602]: 2024-02-12T19:42:44.125694Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 12 19:42:44.125917 waagent[1602]: 2024-02-12T19:42:44.125852Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:44.126563 waagent[1602]: 2024-02-12T19:42:44.126507Z INFO EnvHandler ExtHandler Configure routes
Feb 12 19:42:44.126795 waagent[1602]: 2024-02-12T19:42:44.126724Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 12 19:42:44.126795 waagent[1602]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 12 19:42:44.126795 waagent[1602]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 12 19:42:44.126795 waagent[1602]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 12 19:42:44.126795 waagent[1602]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:44.126795 waagent[1602]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:44.126795 waagent[1602]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:44.127208 waagent[1602]: 2024-02-12T19:42:44.127159Z INFO EnvHandler ExtHandler Gateway:None
Feb 12 19:42:44.127664 waagent[1602]: 2024-02-12T19:42:44.127614Z INFO EnvHandler ExtHandler Routes:None
Feb 12 19:42:44.131992 waagent[1602]: 2024-02-12T19:42:44.131921Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 12 19:42:44.132232 waagent[1602]: 2024-02-12T19:42:44.132182Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 12 19:42:44.132890 waagent[1602]: 2024-02-12T19:42:44.132836Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
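Editor's note: the `/proc/net/route` dump above stores IPv4 addresses as little-endian hexadecimal words, which is why the gateway appears as `0108C80A` rather than `10.200.8.1`. A minimal sketch for decoding those fields (the `decode_route_addr` helper name is ours, not part of the agent):

```python
import socket
import struct

def decode_route_addr(hexaddr: str) -> str:
    # /proc/net/route fields are 32-bit little-endian hex; unpack
    # to bytes and render in dotted-quad notation.
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# The default route's gateway logged above: 0108C80A -> 10.200.8.1,
# matching the DHCPv4 lease "gateway 10.200.8.1" earlier in the log.
print(decode_route_addr("0108C80A"))  # 10.200.8.1
print(decode_route_addr("0008C80A"))  # 10.200.8.0 (the local /24 subnet)
```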
Feb 12 19:42:44.145772 waagent[1602]: 2024-02-12T19:42:44.145725Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 12 19:42:44.146649 waagent[1602]: 2024-02-12T19:42:44.146604Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 12 19:42:44.147687 waagent[1602]: 2024-02-12T19:42:44.147638Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 12 19:42:44.197740 waagent[1602]: 2024-02-12T19:42:44.197659Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 12 19:42:44.208319 waagent[1602]: 2024-02-12T19:42:44.208261Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1592'
Feb 12 19:42:44.318869 waagent[1602]: 2024-02-12T19:42:44.318747Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 12 19:42:44.318869 waagent[1602]: Executing ['ip', '-a', '-o', 'link']:
Feb 12 19:42:44.318869 waagent[1602]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 12 19:42:44.318869 waagent[1602]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:c5:6d brd ff:ff:ff:ff:ff:ff
Feb 12 19:42:44.318869 waagent[1602]: 3: enP34778s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:c5:6d brd ff:ff:ff:ff:ff:ff\ altname enP34778p0s2
Feb 12 19:42:44.318869 waagent[1602]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 12 19:42:44.318869 waagent[1602]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 12 19:42:44.318869 waagent[1602]: 2: eth0 inet 10.200.8.35/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 12 19:42:44.318869 waagent[1602]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 12 19:42:44.318869 waagent[1602]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 12 19:42:44.318869 waagent[1602]: 2: eth0 inet6 fe80::20d:3aff:fe66:c56d/64 scope link \ valid_lft forever preferred_lft forever
Feb 12 19:42:44.526495 waagent[1602]: 2024-02-12T19:42:44.526165Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Feb 12 19:42:44.531243 waagent[1602]: 2024-02-12T19:42:44.531121Z INFO EnvHandler ExtHandler Firewall rules:
Feb 12 19:42:44.531243 waagent[1602]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:44.531243 waagent[1602]: pkts bytes target prot opt in out source destination
Feb 12 19:42:44.531243 waagent[1602]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:44.531243 waagent[1602]: pkts bytes target prot opt in out source destination
Feb 12 19:42:44.531243 waagent[1602]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:44.531243 waagent[1602]: pkts bytes target prot opt in out source destination
Feb 12 19:42:44.531243 waagent[1602]: 2 796 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 12 19:42:44.531243 waagent[1602]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 12 19:42:44.534814 waagent[1602]: 2024-02-12T19:42:44.534758Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 12 19:42:44.574978 waagent[1602]: 2024-02-12T19:42:44.574911Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 12 19:42:44.649058 waagent[1528]: 2024-02-12T19:42:44.648908Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 12 19:42:44.654389 waagent[1528]: 2024-02-12T19:42:44.654315Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 12 19:42:45.655677 waagent[1642]: 2024-02-12T19:42:45.655567Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 12 19:42:45.656364 waagent[1642]: 2024-02-12T19:42:45.656290Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 12 19:42:45.656522 waagent[1642]: 2024-02-12T19:42:45.656469Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 12 19:42:45.665935 waagent[1642]: 2024-02-12T19:42:45.665842Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 12 19:42:45.666301 waagent[1642]: 2024-02-12T19:42:45.666245Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:45.666484 waagent[1642]: 2024-02-12T19:42:45.666433Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:45.677803 waagent[1642]: 2024-02-12T19:42:45.677733Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 12 19:42:45.686460 waagent[1642]: 2024-02-12T19:42:45.686402Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 12 19:42:45.687320 waagent[1642]: 2024-02-12T19:42:45.687260Z INFO ExtHandler
Feb 12 19:42:45.687489 waagent[1642]: 2024-02-12T19:42:45.687438Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c7e863f3-48ed-4fd7-97b2-7875552cc501 eTag: 549548178217216908 source: Fabric]
Feb 12 19:42:45.688162 waagent[1642]: 2024-02-12T19:42:45.688105Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 12 19:42:45.689234 waagent[1642]: 2024-02-12T19:42:45.689173Z INFO ExtHandler
Feb 12 19:42:45.689383 waagent[1642]: 2024-02-12T19:42:45.689317Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 12 19:42:45.695938 waagent[1642]: 2024-02-12T19:42:45.695886Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 12 19:42:45.696358 waagent[1642]: 2024-02-12T19:42:45.696301Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 12 19:42:45.723049 waagent[1642]: 2024-02-12T19:42:45.722989Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 12 19:42:45.788265 waagent[1642]: 2024-02-12T19:42:45.788147Z INFO ExtHandler Downloaded certificate {'thumbprint': '7B15B499A7D51388C87A7842FC5BB0BA49F41717', 'hasPrivateKey': True}
Feb 12 19:42:45.789178 waagent[1642]: 2024-02-12T19:42:45.789119Z INFO ExtHandler Downloaded certificate {'thumbprint': '6DA4298D41BFED0C2C14873E930D619312F7DAB3', 'hasPrivateKey': False}
Feb 12 19:42:45.790112 waagent[1642]: 2024-02-12T19:42:45.790052Z INFO ExtHandler Fetch goal state completed
Feb 12 19:42:45.812580 waagent[1642]: 2024-02-12T19:42:45.812513Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1642
Feb 12 19:42:45.815784 waagent[1642]: 2024-02-12T19:42:45.815715Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 12 19:42:45.817144 waagent[1642]: 2024-02-12T19:42:45.817089Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 12 19:42:45.821629 waagent[1642]: 2024-02-12T19:42:45.821579Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 12 19:42:45.821978 waagent[1642]: 2024-02-12T19:42:45.821923Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 12 19:42:45.829681 waagent[1642]: 2024-02-12T19:42:45.829630Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 12 19:42:45.830105 waagent[1642]: 2024-02-12T19:42:45.830051Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 12 19:42:45.852518 waagent[1642]: 2024-02-12T19:42:45.852401Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Feb 12 19:42:45.855805 waagent[1642]: 2024-02-12T19:42:45.855685Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Feb 12 19:42:45.861655 waagent[1642]: 2024-02-12T19:42:45.861583Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 12 19:42:45.863095 waagent[1642]: 2024-02-12T19:42:45.863037Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 12 19:42:45.863386 waagent[1642]: 2024-02-12T19:42:45.863312Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:45.864233 waagent[1642]: 2024-02-12T19:42:45.864178Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:45.864769 waagent[1642]: 2024-02-12T19:42:45.864711Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 12 19:42:45.865043 waagent[1642]: 2024-02-12T19:42:45.864986Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 12 19:42:45.865043 waagent[1642]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 12 19:42:45.865043 waagent[1642]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 12 19:42:45.865043 waagent[1642]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 12 19:42:45.865043 waagent[1642]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:45.865043 waagent[1642]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:45.865043 waagent[1642]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:42:45.867192 waagent[1642]: 2024-02-12T19:42:45.867101Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 12 19:42:45.868138 waagent[1642]: 2024-02-12T19:42:45.868061Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:42:45.868459 waagent[1642]: 2024-02-12T19:42:45.868400Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 12 19:42:45.868657 waagent[1642]: 2024-02-12T19:42:45.868590Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 12 19:42:45.871521 waagent[1642]: 2024-02-12T19:42:45.871433Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:42:45.872160 waagent[1642]: 2024-02-12T19:42:45.872081Z INFO EnvHandler ExtHandler Configure routes
Feb 12 19:42:45.872953 waagent[1642]: 2024-02-12T19:42:45.872887Z INFO EnvHandler ExtHandler Gateway:None
Feb 12 19:42:45.874820 waagent[1642]: 2024-02-12T19:42:45.874699Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 12 19:42:45.875244 waagent[1642]: 2024-02-12T19:42:45.875181Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 12 19:42:45.875487 waagent[1642]: 2024-02-12T19:42:45.875415Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 12 19:42:45.876614 waagent[1642]: 2024-02-12T19:42:45.876560Z INFO EnvHandler ExtHandler Routes:None
Feb 12 19:42:45.877782 waagent[1642]: 2024-02-12T19:42:45.877731Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 12 19:42:45.877782 waagent[1642]: Executing ['ip', '-a', '-o', 'link']:
Feb 12 19:42:45.877782 waagent[1642]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 12 19:42:45.877782 waagent[1642]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:c5:6d brd ff:ff:ff:ff:ff:ff
Feb 12 19:42:45.877782 waagent[1642]: 3: enP34778s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:c5:6d brd ff:ff:ff:ff:ff:ff\ altname enP34778p0s2
Feb 12 19:42:45.877782 waagent[1642]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 12 19:42:45.877782 waagent[1642]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 12 19:42:45.877782 waagent[1642]: 2: eth0 inet 10.200.8.35/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 12 19:42:45.877782 waagent[1642]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 12 19:42:45.877782 waagent[1642]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 12 19:42:45.877782 waagent[1642]: 2: eth0 inet6 fe80::20d:3aff:fe66:c56d/64 scope link \ valid_lft forever preferred_lft forever
Feb 12 19:42:45.900370 waagent[1642]: 2024-02-12T19:42:45.900285Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 12 19:42:45.902139 waagent[1642]: 2024-02-12T19:42:45.902084Z INFO ExtHandler ExtHandler Downloading manifest
Feb 12 19:42:45.968269 waagent[1642]: 2024-02-12T19:42:45.968208Z INFO ExtHandler ExtHandler
Feb 12 19:42:45.968829 waagent[1642]: 2024-02-12T19:42:45.968771Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 12 19:42:45.968829 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:45.968829 waagent[1642]: pkts bytes target prot opt in out source destination
Feb 12 19:42:45.968829 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:45.968829 waagent[1642]: pkts bytes target prot opt in out source destination
Feb 12 19:42:45.968829 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:42:45.968829 waagent[1642]: pkts bytes target prot opt in out source destination
Feb 12 19:42:45.968829 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 12 19:42:45.968829 waagent[1642]: 119 14582 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 12 19:42:45.968829 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 12 19:42:45.969225 waagent[1642]: 2024-02-12T19:42:45.968931Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3ec6f9ba-fa94-42de-bcdb-af0ceb2efa9a correlation 226c8730-c3c5-4de7-8f3b-496aa44d15fe
created: 2024-02-12T19:40:04.937065Z] Feb 12 19:42:45.970389 waagent[1642]: 2024-02-12T19:42:45.970321Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:42:45.972096 waagent[1642]: 2024-02-12T19:42:45.972040Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 12 19:42:45.992187 waagent[1642]: 2024-02-12T19:42:45.992122Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:42:46.007859 waagent[1642]: 2024-02-12T19:42:46.007787Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 051B4963-0A86-441B-8F5E-0CAC7558370F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:43:04.917825 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 12 19:43:12.463837 systemd[1]: Created slice system-sshd.slice. Feb 12 19:43:12.465905 systemd[1]: Started sshd@0-10.200.8.35:22-10.200.12.6:33134.service. Feb 12 19:43:13.332119 sshd[1680]: Accepted publickey for core from 10.200.12.6 port 33134 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:13.333794 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:13.338397 systemd-logind[1403]: New session 3 of user core. Feb 12 19:43:13.339650 systemd[1]: Started session-3.scope. Feb 12 19:43:13.878222 systemd[1]: Started sshd@1-10.200.8.35:22-10.200.12.6:33144.service. Feb 12 19:43:14.515127 sshd[1685]: Accepted publickey for core from 10.200.12.6 port 33144 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:14.516757 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:14.522545 systemd[1]: Started session-4.scope. Feb 12 19:43:14.523485 systemd-logind[1403]: New session 4 of user core. 
Feb 12 19:43:14.965149 sshd[1685]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:14.968321 systemd[1]: sshd@1-10.200.8.35:22-10.200.12.6:33144.service: Deactivated successfully.
Feb 12 19:43:14.970297 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:43:14.970832 systemd-logind[1403]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:43:14.972071 systemd-logind[1403]: Removed session 4.
Feb 12 19:43:15.069042 systemd[1]: Started sshd@2-10.200.8.35:22-10.200.12.6:33156.service.
Feb 12 19:43:15.760640 sshd[1692]: Accepted publickey for core from 10.200.12.6 port 33156 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:43:15.762212 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:15.767847 systemd[1]: Started session-5.scope.
Feb 12 19:43:15.768090 systemd-logind[1403]: New session 5 of user core.
Feb 12 19:43:16.208482 sshd[1692]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:16.211207 systemd[1]: sshd@2-10.200.8.35:22-10.200.12.6:33156.service: Deactivated successfully.
Feb 12 19:43:16.214253 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:43:16.215615 systemd-logind[1403]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:43:16.216645 systemd-logind[1403]: Removed session 5.
Feb 12 19:43:16.310160 systemd[1]: Started sshd@3-10.200.8.35:22-10.200.12.6:33164.service.
Feb 12 19:43:16.927190 sshd[1702]: Accepted publickey for core from 10.200.12.6 port 33164 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:43:16.928810 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:16.933857 systemd[1]: Started session-6.scope.
Feb 12 19:43:16.934108 systemd-logind[1403]: New session 6 of user core.
Feb 12 19:43:17.368580 sshd[1702]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:17.371491 systemd[1]: sshd@3-10.200.8.35:22-10.200.12.6:33164.service: Deactivated successfully.
Feb 12 19:43:17.372595 systemd-logind[1403]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:43:17.372680 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:43:17.374063 systemd-logind[1403]: Removed session 6.
Feb 12 19:43:17.477997 systemd[1]: Started sshd@4-10.200.8.35:22-10.200.12.6:43474.service.
Feb 12 19:43:18.103925 sshd[1712]: Accepted publickey for core from 10.200.12.6 port 43474 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:43:18.105517 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:18.111323 systemd[1]: Started session-7.scope.
Feb 12 19:43:18.111651 systemd-logind[1403]: New session 7 of user core.
Feb 12 19:43:18.775734 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 12 19:43:18.776074 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:43:18.803798 dbus-daemon[1385]: Н4\u001d-V: received setenforce notice (enforcing=327536512)
Feb 12 19:43:18.805848 sudo[1716]: pam_unix(sudo:session): session closed for user root
Feb 12 19:43:18.921805 sshd[1712]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:18.925632 systemd[1]: sshd@4-10.200.8.35:22-10.200.12.6:43474.service: Deactivated successfully.
Feb 12 19:43:18.927643 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:43:18.928372 systemd-logind[1403]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:43:18.929936 systemd-logind[1403]: Removed session 7.
Feb 12 19:43:19.025549 systemd[1]: Started sshd@5-10.200.8.35:22-10.200.12.6:43484.service.
Feb 12 19:43:19.647800 sshd[1720]: Accepted publickey for core from 10.200.12.6 port 43484 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:43:19.649475 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:19.655204 systemd[1]: Started session-8.scope.
Feb 12 19:43:19.655480 systemd-logind[1403]: New session 8 of user core.
Feb 12 19:43:19.767249 update_engine[1408]: I0212 19:43:19.767190 1408 update_attempter.cc:509] Updating boot flags...
Feb 12 19:43:19.987320 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 12 19:43:19.987616 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:43:19.990230 sudo[1791]: pam_unix(sudo:session): session closed for user root
Feb 12 19:43:19.994520 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 12 19:43:19.994769 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:43:20.002905 systemd[1]: Stopping audit-rules.service...
Feb 12 19:43:20.003000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 19:43:20.007693 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 12 19:43:20.007746 kernel: audit: type=1305 audit(1707767000.003:139): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 19:43:20.007907 auditctl[1794]: No rules
Feb 12 19:43:20.008297 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 12 19:43:20.008525 systemd[1]: Stopped audit-rules.service.
Feb 12 19:43:20.010036 systemd[1]: Starting audit-rules.service...
Feb 12 19:43:20.003000 audit[1794]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff39896c10 a2=420 a3=0 items=0 ppid=1 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:20.031320 kernel: audit: type=1300 audit(1707767000.003:139): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff39896c10 a2=420 a3=0 items=0 ppid=1 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:20.003000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 12 19:43:20.035099 augenrules[1812]: No rules
Feb 12 19:43:20.036433 kernel: audit: type=1327 audit(1707767000.003:139): proctitle=2F7362696E2F617564697463746C002D44
Feb 12 19:43:20.035986 systemd[1]: Finished audit-rules.service.
Feb 12 19:43:20.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.037712 sudo[1790]: pam_unix(sudo:session): session closed for user root
Feb 12 19:43:20.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.060176 kernel: audit: type=1131 audit(1707767000.007:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.060241 kernel: audit: type=1130 audit(1707767000.035:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.060374 kernel: audit: type=1106 audit(1707767000.037:142): pid=1790 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.037000 audit[1790]: USER_END pid=1790 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.037000 audit[1790]: CRED_DISP pid=1790 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.085932 kernel: audit: type=1104 audit(1707767000.037:143): pid=1790 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.139778 sshd[1720]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:20.140000 audit[1720]: USER_END pid=1720 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.143343 systemd[1]: sshd@5-10.200.8.35:22-10.200.12.6:43484.service: Deactivated successfully.
Feb 12 19:43:20.144245 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:43:20.149848 systemd-logind[1403]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:43:20.150724 systemd-logind[1403]: Removed session 8.
Feb 12 19:43:20.141000 audit[1720]: CRED_DISP pid=1720 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.168838 kernel: audit: type=1106 audit(1707767000.140:144): pid=1720 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.168912 kernel: audit: type=1104 audit(1707767000.141:145): pid=1720 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.168935 kernel: audit: type=1131 audit(1707767000.141:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.35:22-10.200.12.6:43484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.35:22-10.200.12.6:43484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.245304 systemd[1]: Started sshd@6-10.200.8.35:22-10.200.12.6:43500.service.
Feb 12 19:43:20.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.35:22-10.200.12.6:43500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:20.866000 audit[1819]: USER_ACCT pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.866807 sshd[1819]: Accepted publickey for core from 10.200.12.6 port 43500 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:43:20.867000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.867000 audit[1819]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed62ee260 a2=3 a3=0 items=0 ppid=1 pid=1819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:20.867000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:43:20.868208 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:20.873693 systemd[1]: Started session-9.scope.
Feb 12 19:43:20.874662 systemd-logind[1403]: New session 9 of user core.
Feb 12 19:43:20.880000 audit[1819]: USER_START pid=1819 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:20.882000 audit[1822]: CRED_ACQ pid=1822 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:43:21.208000 audit[1823]: USER_ACCT pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:21.209375 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:43:21.209000 audit[1823]: CRED_REFR pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:21.209642 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:43:21.210000 audit[1823]: USER_START pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.076262 systemd[1]: Starting docker.service...
Feb 12 19:43:22.127695 env[1838]: time="2024-02-12T19:43:22.127640091Z" level=info msg="Starting up"
Feb 12 19:43:22.128977 env[1838]: time="2024-02-12T19:43:22.128955192Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:43:22.129083 env[1838]: time="2024-02-12T19:43:22.129071792Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:43:22.129156 env[1838]: time="2024-02-12T19:43:22.129145092Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:43:22.129195 env[1838]: time="2024-02-12T19:43:22.129187592Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:43:22.133939 env[1838]: time="2024-02-12T19:43:22.133922496Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:43:22.134026 env[1838]: time="2024-02-12T19:43:22.134016396Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:43:22.134075 env[1838]: time="2024-02-12T19:43:22.134065996Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:43:22.134118 env[1838]: time="2024-02-12T19:43:22.134110896Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:43:22.140473 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport411196857-merged.mount: Deactivated successfully.
Feb 12 19:43:22.271351 env[1838]: time="2024-02-12T19:43:22.271312215Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 12 19:43:22.271351 env[1838]: time="2024-02-12T19:43:22.271346715Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 12 19:43:22.271607 env[1838]: time="2024-02-12T19:43:22.271558315Z" level=info msg="Loading containers: start."
Feb 12 19:43:22.323000 audit[1865]: NETFILTER_CFG table=nat:6 family=2 entries=2 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.323000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd0571eb40 a2=0 a3=7ffd0571eb2c items=0 ppid=1838 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.323000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 12 19:43:22.325000 audit[1867]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.325000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd739fe0d0 a2=0 a3=7ffd739fe0bc items=0 ppid=1838 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.325000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 12 19:43:22.326000 audit[1869]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.326000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcd27442e0 a2=0 a3=7ffcd27442cc items=0 ppid=1838 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 12 19:43:22.328000 audit[1871]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.328000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffedfefd050 a2=0 a3=7ffedfefd03c items=0 ppid=1838 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.328000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 12 19:43:22.330000 audit[1873]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.330000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe7c38bf00 a2=0 a3=7ffe7c38beec items=0 ppid=1838 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.330000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 12 19:43:22.332000 audit[1875]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_rule pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.332000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffecfbf2340 a2=0 a3=7ffecfbf232c items=0 ppid=1838 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 12 19:43:22.347000 audit[1877]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.347000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2ea46eb0 a2=0 a3=7ffe2ea46e9c items=0 ppid=1838 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.347000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 12 19:43:22.349000 audit[1879]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.349000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdf7873580 a2=0 a3=7ffdf787356c items=0 ppid=1838 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.349000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 12 19:43:22.351000 audit[1881]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.351000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffeb5a40fa0 a2=0 a3=7ffeb5a40f8c items=0 ppid=1838 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.351000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 19:43:22.365000 audit[1885]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_unregister_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.365000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdde9df430 a2=0 a3=7ffdde9df41c items=0 ppid=1838 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 12 19:43:22.366000 audit[1886]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.366000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe94f130c0 a2=0 a3=7ffe94f130ac items=0 ppid=1838 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 19:43:22.425358 kernel: Initializing XFRM netlink socket
Feb 12 19:43:22.459993 env[1838]: time="2024-02-12T19:43:22.459953779Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:43:22.537000 audit[1893]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.537000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff91c600f0 a2=0 a3=7fff91c600dc items=0 ppid=1838 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.537000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 12 19:43:22.564000 audit[1896]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.564000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc959a0a40 a2=0 a3=7ffc959a0a2c items=0 ppid=1838 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.564000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 12 19:43:22.568000 audit[1899]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.568000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff10b24e40 a2=0 a3=7fff10b24e2c items=0 ppid=1838 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.568000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 12 19:43:22.570000 audit[1901]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.570000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff70c57fb0 a2=0 a3=7fff70c57f9c items=0 ppid=1838 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.570000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 12 19:43:22.572000 audit[1903]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.572000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcaee35990 a2=0 a3=7ffcaee3597c items=0 ppid=1838 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 12 19:43:22.574000 audit[1905]: NETFILTER_CFG table=nat:22 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.574000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffddb7fbe80 a2=0 a3=7ffddb7fbe6c items=0 ppid=1838 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.574000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 12 19:43:22.576000 audit[1907]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.576000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffd7967700 a2=0 a3=7fffd79676ec items=0 ppid=1838 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.576000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 12 19:43:22.578000 audit[1909]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.578000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc8713a3e0 a2=0 a3=7ffc8713a3cc items=0 ppid=1838 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.578000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 12 19:43:22.581000 audit[1911]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.581000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdbfc74ae0 a2=0 a3=7ffdbfc74acc items=0 ppid=1838 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.581000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 12 19:43:22.582000 audit[1913]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.582000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdc1c59460 a2=0 a3=7ffdc1c5944c items=0 ppid=1838 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.582000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 12 19:43:22.584000 audit[1915]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.584000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe676290b0 a2=0 a3=7ffe6762909c items=0 ppid=1838 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.584000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 12 19:43:22.585570 systemd-networkd[1592]: docker0: Link UP
Feb 12 19:43:22.600000 audit[1919]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_unregister_rule pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.600000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe55685e30 a2=0 a3=7ffe55685e1c items=0 ppid=1838 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.600000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 12 19:43:22.601000 audit[1920]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:43:22.601000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeff5dc720 a2=0 a3=7ffeff5dc70c items=0 ppid=1838 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 19:43:22.602027 env[1838]: time="2024-02-12T19:43:22.601989902Z" level=info msg="Loading containers: done."
Feb 12 19:43:22.613821 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3285506069-merged.mount: Deactivated successfully.
Feb 12 19:43:22.642106 env[1838]: time="2024-02-12T19:43:22.642070437Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:43:22.642304 env[1838]: time="2024-02-12T19:43:22.642281037Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:43:22.642426 env[1838]: time="2024-02-12T19:43:22.642405637Z" level=info msg="Daemon has completed initialization" Feb 12 19:43:22.675490 systemd[1]: Started docker.service. Feb 12 19:43:22.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:22.676464 env[1838]: time="2024-02-12T19:43:22.676422166Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:43:22.697366 systemd[1]: Reloading. Feb 12 19:43:22.769157 /usr/lib/systemd/system-generators/torcx-generator[1967]: time="2024-02-12T19:43:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:22.769194 /usr/lib/systemd/system-generators/torcx-generator[1967]: time="2024-02-12T19:43:22Z" level=info msg="torcx already run" Feb 12 19:43:22.860510 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:22.860530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 19:43:22.877037 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:22.959798 systemd[1]: Started kubelet.service. Feb 12 19:43:22.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:23.028399 kubelet[2034]: E0212 19:43:23.028324 2034 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:23.030126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:23.030327 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:23.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:43:27.203067 env[1420]: time="2024-02-12T19:43:27.203006533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:43:27.779196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014844313.mount: Deactivated successfully. 
Feb 12 19:43:29.956195 env[1420]: time="2024-02-12T19:43:29.956080841Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:29.962343 env[1420]: time="2024-02-12T19:43:29.962298566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:29.967370 env[1420]: time="2024-02-12T19:43:29.967316185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:29.971986 env[1420]: time="2024-02-12T19:43:29.971860603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:29.972965 env[1420]: time="2024-02-12T19:43:29.972936708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 19:43:29.982511 env[1420]: time="2024-02-12T19:43:29.982485645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:43:31.958806 env[1420]: time="2024-02-12T19:43:31.958694745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:31.999233 env[1420]: time="2024-02-12T19:43:31.999188942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.006515 env[1420]: time="2024-02-12T19:43:32.006481453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.012664 env[1420]: time="2024-02-12T19:43:32.012630029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.013293 env[1420]: time="2024-02-12T19:43:32.013259947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 19:43:32.023367 env[1420]: time="2024-02-12T19:43:32.023321937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:43:33.160859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:43:33.176294 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 12 19:43:33.176383 kernel: audit: type=1130 audit(1707767013.160:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:33.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:33.161132 systemd[1]: Stopped kubelet.service. Feb 12 19:43:33.162833 systemd[1]: Started kubelet.service. Feb 12 19:43:33.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:43:33.188363 kernel: audit: type=1131 audit(1707767013.160:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:33.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:33.208366 kernel: audit: type=1130 audit(1707767013.160:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:33.246059 env[1420]: time="2024-02-12T19:43:33.244912358Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.249346 kubelet[2062]: E0212 19:43:33.249298 2062 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:33.251842 env[1420]: time="2024-02-12T19:43:33.251805551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:43:33.253210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:33.253408 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 19:43:33.267396 kernel: audit: type=1131 audit(1707767013.253:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:43:33.268052 env[1420]: time="2024-02-12T19:43:33.268007604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.274194 env[1420]: time="2024-02-12T19:43:33.274143075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.274877 env[1420]: time="2024-02-12T19:43:33.274846395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 19:43:33.284108 env[1420]: time="2024-02-12T19:43:33.284083453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:43:34.435214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966106528.mount: Deactivated successfully. 
Feb 12 19:43:34.917962 env[1420]: time="2024-02-12T19:43:34.917904925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.923735 env[1420]: time="2024-02-12T19:43:34.923678282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.927786 env[1420]: time="2024-02-12T19:43:34.927758693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.933929 env[1420]: time="2024-02-12T19:43:34.933900960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.934328 env[1420]: time="2024-02-12T19:43:34.934299871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:43:34.943167 env[1420]: time="2024-02-12T19:43:34.943140211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:43:35.433933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161589714.mount: Deactivated successfully. 
Feb 12 19:43:35.457249 env[1420]: time="2024-02-12T19:43:35.457209246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:35.467382 env[1420]: time="2024-02-12T19:43:35.467290713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:35.472757 env[1420]: time="2024-02-12T19:43:35.472639554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:35.477583 env[1420]: time="2024-02-12T19:43:35.477550484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:35.478040 env[1420]: time="2024-02-12T19:43:35.478012196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:43:35.487286 env[1420]: time="2024-02-12T19:43:35.487262041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:43:36.210813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548402279.mount: Deactivated successfully. 
Feb 12 19:43:40.348670 env[1420]: time="2024-02-12T19:43:40.348620534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.356828 env[1420]: time="2024-02-12T19:43:40.356788822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.361831 env[1420]: time="2024-02-12T19:43:40.361793237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.367499 env[1420]: time="2024-02-12T19:43:40.367451968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.367794 env[1420]: time="2024-02-12T19:43:40.367746475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 19:43:40.377457 env[1420]: time="2024-02-12T19:43:40.377429098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:43:40.930760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893617504.mount: Deactivated successfully. 
Feb 12 19:43:41.553677 env[1420]: time="2024-02-12T19:43:41.553625981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:41.567234 env[1420]: time="2024-02-12T19:43:41.567192385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:41.574392 env[1420]: time="2024-02-12T19:43:41.574362946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:41.579758 env[1420]: time="2024-02-12T19:43:41.579663665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:41.580479 env[1420]: time="2024-02-12T19:43:41.580447483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 19:43:43.410809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:43:43.411047 systemd[1]: Stopped kubelet.service. Feb 12 19:43:43.413064 systemd[1]: Started kubelet.service. Feb 12 19:43:43.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:43.430352 kernel: audit: type=1130 audit(1707767023.409:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:43.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:43.460297 kernel: audit: type=1131 audit(1707767023.409:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:43.460387 kernel: audit: type=1130 audit(1707767023.411:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:43.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:43.525269 kubelet[2142]: E0212 19:43:43.525218 2142 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:43.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:43:43.545381 kernel: audit: type=1131 audit(1707767023.526:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 12 19:43:43.527075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:43.527260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:44.227032 systemd[1]: Stopped kubelet.service. Feb 12 19:43:44.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.249205 systemd[1]: Reloading. Feb 12 19:43:44.257724 kernel: audit: type=1130 audit(1707767024.225:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.257791 kernel: audit: type=1131 audit(1707767024.228:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.339688 /usr/lib/systemd/system-generators/torcx-generator[2176]: time="2024-02-12T19:43:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:44.349098 /usr/lib/systemd/system-generators/torcx-generator[2176]: time="2024-02-12T19:43:44Z" level=info msg="torcx already run" Feb 12 19:43:44.420890 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 19:43:44.420911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:44.437064 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:44.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.526770 systemd[1]: Started kubelet.service. Feb 12 19:43:44.543360 kernel: audit: type=1130 audit(1707767024.525:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:44.591311 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:44.591311 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:44.591761 kubelet[2242]: I0212 19:43:44.591356 2242 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:44.592627 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:44.592627 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:45.359442 kubelet[2242]: I0212 19:43:45.359406 2242 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:43:45.359442 kubelet[2242]: I0212 19:43:45.359433 2242 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:45.359720 kubelet[2242]: I0212 19:43:45.359701 2242 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:43:45.362625 kubelet[2242]: E0212 19:43:45.362602 2242 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.362803 kubelet[2242]: I0212 19:43:45.362787 2242 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:45.365647 kubelet[2242]: I0212 19:43:45.365627 2242 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:43:45.365984 kubelet[2242]: I0212 19:43:45.365963 2242 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:45.366069 kubelet[2242]: I0212 19:43:45.366056 2242 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:43:45.366189 kubelet[2242]: I0212 19:43:45.366084 2242 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:43:45.366189 kubelet[2242]: I0212 19:43:45.366099 2242 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:43:45.366276 kubelet[2242]: I0212 19:43:45.366203 2242 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:43:45.369043 kubelet[2242]: I0212 19:43:45.369025 2242 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:43:45.369156 kubelet[2242]: I0212 19:43:45.369146 2242 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:45.369252 kubelet[2242]: I0212 19:43:45.369242 2242 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:43:45.369329 kubelet[2242]: I0212 19:43:45.369320 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:45.369507 kubelet[2242]: W0212 19:43:45.369465 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c8dbf10a06&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.369580 kubelet[2242]: E0212 19:43:45.369537 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c8dbf10a06&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.370178 kubelet[2242]: W0212 19:43:45.370142 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.370285 kubelet[2242]: E0212 19:43:45.370274 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.370467 kubelet[2242]: I0212 19:43:45.370454 2242 kuberuntime_manager.go:244] "Container runtime initialized" 
containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:45.370801 kubelet[2242]: W0212 19:43:45.370788 2242 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:43:45.371264 kubelet[2242]: I0212 19:43:45.371248 2242 server.go:1186] "Started kubelet" Feb 12 19:43:45.371000 audit[2242]: AVC avc: denied { mac_admin } for pid=2242 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:45.374356 kubelet[2242]: I0212 19:43:45.373008 2242 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:43:45.374356 kubelet[2242]: I0212 19:43:45.373034 2242 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:43:45.374356 kubelet[2242]: I0212 19:43:45.373085 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:45.376052 kubelet[2242]: E0212 19:43:45.375981 2242 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c8dbf10a06.17b3350fba60a6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c8dbf10a06", 
UID:"ci-3510.3.2-a-c8dbf10a06", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c8dbf10a06"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 45, 371227811, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 45, 371227811, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.35:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.35:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:43:45.376308 kubelet[2242]: I0212 19:43:45.376294 2242 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:45.376904 kubelet[2242]: I0212 19:43:45.376890 2242 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:43:45.377968 kubelet[2242]: I0212 19:43:45.377955 2242 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:43:45.379328 kubelet[2242]: I0212 19:43:45.379313 2242 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:45.379654 kubelet[2242]: W0212 19:43:45.379630 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.379735 kubelet[2242]: E0212 19:43:45.379727 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.379830 kubelet[2242]: E0212 19:43:45.379818 2242 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-c8dbf10a06\" not found" Feb 12 19:43:45.380058 kubelet[2242]: E0212 19:43:45.380040 2242 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c8dbf10a06?timeout=10s": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.387680 kubelet[2242]: E0212 19:43:45.387666 2242 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:45.387792 kubelet[2242]: E0212 19:43:45.387783 2242 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:45.389356 kernel: audit: type=1400 audit(1707767025.371:194): avc: denied { mac_admin } for pid=2242 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:45.371000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:45.371000 audit[2242]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00087fdd0 a1=c000808dc8 a2=c00087fda0 a3=25 items=0 ppid=1 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.423730 kernel: audit: type=1401 audit(1707767025.371:194): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:45.423818 kernel: audit: type=1300 audit(1707767025.371:194): arch=c000003e syscall=188 success=no exit=-22 a0=c00087fdd0 a1=c000808dc8 a2=c00087fda0 a3=25 items=0 ppid=1 pid=2242 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.371000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:45.371000 audit[2242]: AVC avc: denied { mac_admin } for pid=2242 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:45.371000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:45.371000 audit[2242]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c58460 a1=c000808de0 a2=c00087fe60 a3=25 items=0 ppid=1 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.371000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:45.401000 audit[2252]: NETFILTER_CFG table=mangle:30 family=2 entries=2 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.401000 audit[2252]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcefc6ba30 a2=0 a3=7ffcefc6ba1c items=0 ppid=2242 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.401000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:43:45.412000 audit[2253]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.412000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4f866fc0 a2=0 a3=7ffd4f866fac items=0 ppid=2242 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:43:45.435000 audit[2257]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.435000 audit[2257]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3656d790 a2=0 a3=7ffc3656d77c items=0 ppid=2242 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:43:45.437000 audit[2259]: NETFILTER_CFG table=filter:33 family=2 entries=2 op=nft_register_chain pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.437000 audit[2259]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeb41c7d90 a2=0 a3=7ffeb41c7d7c items=0 ppid=2242 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:43:45.493000 audit[2264]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.493000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe57d99a90 a2=0 a3=7ffe57d99a7c items=0 ppid=2242 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 19:43:45.495000 audit[2265]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.495000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6ea917c0 a2=0 a3=7ffd6ea917ac items=0 ppid=2242 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:43:45.520000 audit[2268]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.520000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff61cdaf10 a2=0 a3=7fff61cdaefc 
items=0 ppid=2242 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:43:45.522462 kubelet[2242]: I0212 19:43:45.522439 2242 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.523190 kubelet[2242]: E0212 19:43:45.522925 2242 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.523696 kubelet[2242]: I0212 19:43:45.523673 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:45.523696 kubelet[2242]: I0212 19:43:45.523693 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:45.523842 kubelet[2242]: I0212 19:43:45.523712 2242 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:45.527000 audit[2271]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.528553 kubelet[2242]: I0212 19:43:45.528508 2242 policy_none.go:49] "None policy: Start" Feb 12 19:43:45.527000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe7a8f44d0 a2=0 a3=7ffe7a8f44bc items=0 ppid=2242 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.527000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:43:45.529129 kubelet[2242]: I0212 19:43:45.529113 2242 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:45.529196 kubelet[2242]: I0212 19:43:45.529141 2242 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:45.528000 audit[2272]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.528000 audit[2272]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe301ce150 a2=0 a3=7ffe301ce13c items=0 ppid=2242 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:43:45.529000 audit[2273]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.529000 audit[2273]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7b790220 a2=0 a3=7ffe7b79020c items=0 ppid=2242 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:43:45.535000 audit[2275]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2275 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 12 19:43:45.535000 audit[2275]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff7fb126b0 a2=0 a3=7fff7fb1269c items=0 ppid=2242 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.535000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:43:45.537542 kubelet[2242]: I0212 19:43:45.537524 2242 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:45.536000 audit[2242]: AVC avc: denied { mac_admin } for pid=2242 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:45.536000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:45.536000 audit[2242]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001181200 a1=c000f3ab28 a2=c0011811d0 a3=25 items=0 ppid=1 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.536000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:45.539091 kubelet[2242]: I0212 19:43:45.539073 2242 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:43:45.539370 kubelet[2242]: I0212 19:43:45.539356 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:45.540574 kubelet[2242]: E0212 19:43:45.540552 2242 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-c8dbf10a06\" not found" Feb 12 19:43:45.541000 audit[2278]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_rule pid=2278 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.541000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffca1debc80 a2=0 a3=7ffca1debc6c items=0 ppid=2242 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:43:45.544000 audit[2280]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.544000 audit[2280]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe17771040 a2=0 a3=7ffe1777102c items=0 ppid=2242 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.544000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:43:45.546000 audit[2282]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.546000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffdcf0c1740 a2=0 a3=7ffdcf0c172c items=0 ppid=2242 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:43:45.548000 audit[2284]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_rule pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.548000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffdbb493130 a2=0 a3=7ffdbb49311c items=0 ppid=2242 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:43:45.550361 kubelet[2242]: I0212 19:43:45.550327 2242 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:43:45.549000 audit[2285]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.549000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff949a6db0 a2=0 a3=7fff949a6d9c items=0 ppid=2242 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:43:45.550000 audit[2286]: NETFILTER_CFG table=mangle:46 family=2 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.550000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0eb951a0 a2=0 a3=7ffe0eb9518c items=0 ppid=2242 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:43:45.551000 audit[2287]: NETFILTER_CFG table=nat:47 family=10 entries=2 op=nft_register_chain pid=2287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.551000 audit[2287]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd48e34dd0 a2=0 a3=7ffd48e34dbc items=0 ppid=2242 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.551000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:43:45.552000 audit[2288]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.552000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcab5ad000 a2=0 a3=7ffcab5acfec items=0 ppid=2242 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:43:45.553000 audit[2290]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:43:45.553000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd2107210 a2=0 a3=7ffdd21071fc items=0 ppid=2242 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:43:45.554000 audit[2291]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.554000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcdbbdfa40 a2=0 a3=7ffcdbbdfa2c items=0 ppid=2242 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:43:45.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:43:45.555000 audit[2292]: NETFILTER_CFG table=filter:51 family=10 entries=2 op=nft_register_chain pid=2292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.555000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdadc6bf90 a2=0 a3=7ffdadc6bf7c items=0 ppid=2242 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:43:45.557000 audit[2294]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.557000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffec8f04640 a2=0 a3=7ffec8f0462c items=0 ppid=2242 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:43:45.558000 audit[2295]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.558000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbd9c7f00 a2=0 
a3=7fffbd9c7eec items=0 ppid=2242 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:43:45.559000 audit[2296]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.559000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0d0fb360 a2=0 a3=7ffd0d0fb34c items=0 ppid=2242 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:43:45.561000 audit[2298]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.561000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffde31ca2b0 a2=0 a3=7ffde31ca29c items=0 ppid=2242 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:43:45.563000 audit[2300]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.563000 audit[2300]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffbc38ff50 a2=0 a3=7fffbc38ff3c items=0 ppid=2242 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:43:45.565000 audit[2302]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_rule pid=2302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.565000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffea4897800 a2=0 a3=7ffea48977ec items=0 ppid=2242 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:43:45.567000 audit[2304]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_rule pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.567000 audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe7a54caa0 a2=0 a3=7ffe7a54ca8c items=0 ppid=2242 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.567000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:43:45.581051 kubelet[2242]: E0212 19:43:45.580946 2242 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c8dbf10a06?timeout=10s": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.587000 audit[2306]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.587000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffd2e8ab100 a2=0 a3=7ffd2e8ab0ec items=0 ppid=2242 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:43:45.589388 kubelet[2242]: I0212 19:43:45.589324 2242 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:43:45.589388 kubelet[2242]: I0212 19:43:45.589368 2242 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:43:45.589497 kubelet[2242]: I0212 19:43:45.589391 2242 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:43:45.589497 kubelet[2242]: E0212 19:43:45.589439 2242 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:43:45.590100 kubelet[2242]: W0212 19:43:45.590054 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.590100 kubelet[2242]: E0212 19:43:45.590102 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:45.589000 audit[2307]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.589000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc85504c0 a2=0 a3=7fffc85504ac items=0 ppid=2242 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:43:45.591000 audit[2308]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.591000 audit[2308]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2f418ea0 a2=0 a3=7ffd2f418e8c items=0 ppid=2242 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:43:45.592000 audit[2309]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:43:45.592000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe44676cd0 a2=0 a3=7ffe44676cbc items=0 ppid=2242 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:45.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:43:45.689690 kubelet[2242]: I0212 19:43:45.689582 2242 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:45.691717 kubelet[2242]: I0212 19:43:45.691694 2242 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:45.693133 kubelet[2242]: I0212 19:43:45.693108 2242 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:45.695180 kubelet[2242]: I0212 19:43:45.695152 2242 status_manager.go:698] "Failed to get status for pod" podUID=4755c13b85c02de6b9d45464525c2535 pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" err="Get \"https://10.200.8.35:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-c8dbf10a06\": dial tcp 10.200.8.35:6443: connect: connection refused" Feb 12 19:43:45.696719 kubelet[2242]: I0212 19:43:45.696696 2242 status_manager.go:698] "Failed to get status 
for pod" podUID=49f092a17f47a2018e441962689aaa8a pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" err="Get \"https://10.200.8.35:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\": dial tcp 10.200.8.35:6443: connect: connection refused" Feb 12 19:43:45.700407 kubelet[2242]: I0212 19:43:45.700389 2242 status_manager.go:698] "Failed to get status for pod" podUID=8a0ed12dbcdc06bab3e71b2aaa1c2403 pod="kube-system/kube-scheduler-ci-3510.3.2-a-c8dbf10a06" err="Get \"https://10.200.8.35:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-c8dbf10a06\": dial tcp 10.200.8.35:6443: connect: connection refused" Feb 12 19:43:45.724782 kubelet[2242]: I0212 19:43:45.724760 2242 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.725052 kubelet[2242]: E0212 19:43:45.725026 2242 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.781971 kubelet[2242]: I0212 19:43:45.781878 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782247 kubelet[2242]: I0212 19:43:45.782002 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 
19:43:45.782247 kubelet[2242]: I0212 19:43:45.782093 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782247 kubelet[2242]: I0212 19:43:45.782187 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782627 kubelet[2242]: I0212 19:43:45.782311 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782627 kubelet[2242]: I0212 19:43:45.782435 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782627 kubelet[2242]: I0212 19:43:45.782478 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782627 kubelet[2242]: I0212 19:43:45.782519 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a0ed12dbcdc06bab3e71b2aaa1c2403-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-c8dbf10a06\" (UID: \"8a0ed12dbcdc06bab3e71b2aaa1c2403\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.782627 kubelet[2242]: I0212 19:43:45.782556 2242 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:45.982823 kubelet[2242]: E0212 19:43:45.982724 2242 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c8dbf10a06?timeout=10s": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.000172 env[1420]: time="2024-02-12T19:43:46.000132795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-c8dbf10a06,Uid:4755c13b85c02de6b9d45464525c2535,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:46.002671 env[1420]: time="2024-02-12T19:43:46.002633144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-c8dbf10a06,Uid:8a0ed12dbcdc06bab3e71b2aaa1c2403,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:46.002929 env[1420]: time="2024-02-12T19:43:46.002904150Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-c8dbf10a06,Uid:49f092a17f47a2018e441962689aaa8a,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:46.127741 kubelet[2242]: I0212 19:43:46.127700 2242 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:46.128099 kubelet[2242]: E0212 19:43:46.128070 2242 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:46.185997 kubelet[2242]: W0212 19:43:46.185945 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.185997 kubelet[2242]: E0212 19:43:46.186001 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.270972 kubelet[2242]: W0212 19:43:46.270866 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.270972 kubelet[2242]: E0212 19:43:46.270923 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.376295 kubelet[2242]: W0212 19:43:46.376233 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list 
*v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c8dbf10a06&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.376495 kubelet[2242]: E0212 19:43:46.376306 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c8dbf10a06&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.783613 kubelet[2242]: E0212 19:43:46.783561 2242 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c8dbf10a06?timeout=10s": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.813140 kubelet[2242]: W0212 19:43:46.813107 2242 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.813140 kubelet[2242]: E0212 19:43:46.813147 2242 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:46.930430 kubelet[2242]: I0212 19:43:46.930401 2242 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:46.930767 kubelet[2242]: E0212 19:43:46.930746 2242 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 
19:43:46.998990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955789653.mount: Deactivated successfully. Feb 12 19:43:47.036793 env[1420]: time="2024-02-12T19:43:47.036366132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.041552 env[1420]: time="2024-02-12T19:43:47.041486830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.057131 env[1420]: time="2024-02-12T19:43:47.057095629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.061210 env[1420]: time="2024-02-12T19:43:47.061163407Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.065717 env[1420]: time="2024-02-12T19:43:47.065672793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.071396 env[1420]: time="2024-02-12T19:43:47.071366502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.076949 env[1420]: time="2024-02-12T19:43:47.076915608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.079816 env[1420]: time="2024-02-12T19:43:47.079783663Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.087214 env[1420]: time="2024-02-12T19:43:47.087182505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.092043 env[1420]: time="2024-02-12T19:43:47.092010297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.107326 env[1420]: time="2024-02-12T19:43:47.107291989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.123542 env[1420]: time="2024-02-12T19:43:47.123453999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:47.156536 env[1420]: time="2024-02-12T19:43:47.154208987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:47.156536 env[1420]: time="2024-02-12T19:43:47.154246388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:47.156536 env[1420]: time="2024-02-12T19:43:47.154256488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:47.156536 env[1420]: time="2024-02-12T19:43:47.154424491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d56b04cdb890bc9394b13a39391f34f31fc133425512523bcf7ad046d3663404 pid=2319 runtime=io.containerd.runc.v2 Feb 12 19:43:47.174659 env[1420]: time="2024-02-12T19:43:47.174602777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:47.174824 env[1420]: time="2024-02-12T19:43:47.174647978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:47.174824 env[1420]: time="2024-02-12T19:43:47.174661778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:47.174824 env[1420]: time="2024-02-12T19:43:47.174782781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf pid=2339 runtime=io.containerd.runc.v2 Feb 12 19:43:47.204923 env[1420]: time="2024-02-12T19:43:47.204845156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:47.205096 env[1420]: time="2024-02-12T19:43:47.204931458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:47.205096 env[1420]: time="2024-02-12T19:43:47.204958158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:47.205207 env[1420]: time="2024-02-12T19:43:47.205091161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188 pid=2380 runtime=io.containerd.runc.v2 Feb 12 19:43:47.259818 env[1420]: time="2024-02-12T19:43:47.259773707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-c8dbf10a06,Uid:4755c13b85c02de6b9d45464525c2535,Namespace:kube-system,Attempt:0,} returns sandbox id \"d56b04cdb890bc9394b13a39391f34f31fc133425512523bcf7ad046d3663404\"" Feb 12 19:43:47.264650 env[1420]: time="2024-02-12T19:43:47.264614300Z" level=info msg="CreateContainer within sandbox \"d56b04cdb890bc9394b13a39391f34f31fc133425512523bcf7ad046d3663404\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:43:47.296879 env[1420]: time="2024-02-12T19:43:47.295810097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-c8dbf10a06,Uid:8a0ed12dbcdc06bab3e71b2aaa1c2403,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf\"" Feb 12 19:43:47.299964 env[1420]: time="2024-02-12T19:43:47.299935276Z" level=info msg="CreateContainer within sandbox \"a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:43:47.311388 env[1420]: time="2024-02-12T19:43:47.311243492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-c8dbf10a06,Uid:49f092a17f47a2018e441962689aaa8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188\"" Feb 12 19:43:47.311905 env[1420]: time="2024-02-12T19:43:47.311873004Z" level=info msg="CreateContainer within sandbox 
\"d56b04cdb890bc9394b13a39391f34f31fc133425512523bcf7ad046d3663404\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc879c88dea601a2eb275eb0e17358321a182fe640953ed5d12c759f2a901b1c\"" Feb 12 19:43:47.312945 env[1420]: time="2024-02-12T19:43:47.312917824Z" level=info msg="StartContainer for \"cc879c88dea601a2eb275eb0e17358321a182fe640953ed5d12c759f2a901b1c\"" Feb 12 19:43:47.315112 env[1420]: time="2024-02-12T19:43:47.315038165Z" level=info msg="CreateContainer within sandbox \"97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:43:47.355809 kubelet[2242]: E0212 19:43:47.355701 2242 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c8dbf10a06.17b3350fba60a6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c8dbf10a06", UID:"ci-3510.3.2-a-c8dbf10a06", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c8dbf10a06"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 45, 371227811, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 45, 371227811, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post 
"https://10.200.8.35:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.35:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:43:47.363681 env[1420]: time="2024-02-12T19:43:47.363636294Z" level=info msg="CreateContainer within sandbox \"a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d\"" Feb 12 19:43:47.364076 env[1420]: time="2024-02-12T19:43:47.364054602Z" level=info msg="StartContainer for \"2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d\"" Feb 12 19:43:47.373366 env[1420]: time="2024-02-12T19:43:47.373307180Z" level=info msg="CreateContainer within sandbox \"97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f\"" Feb 12 19:43:47.373835 env[1420]: time="2024-02-12T19:43:47.373801289Z" level=info msg="StartContainer for \"c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f\"" Feb 12 19:43:47.440413 env[1420]: time="2024-02-12T19:43:47.440365963Z" level=info msg="StartContainer for \"cc879c88dea601a2eb275eb0e17358321a182fe640953ed5d12c759f2a901b1c\" returns successfully" Feb 12 19:43:47.453344 kubelet[2242]: E0212 19:43:47.453299 2242 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused Feb 12 19:43:47.500016 env[1420]: time="2024-02-12T19:43:47.499968403Z" level=info msg="StartContainer for \"2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d\" returns successfully" Feb 12 19:43:47.518020 env[1420]: 
time="2024-02-12T19:43:47.517968148Z" level=info msg="StartContainer for \"c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f\" returns successfully" Feb 12 19:43:48.532798 kubelet[2242]: I0212 19:43:48.532773 2242 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:50.132030 kubelet[2242]: E0212 19:43:50.131993 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-c8dbf10a06\" not found" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:50.148167 kubelet[2242]: I0212 19:43:50.148131 2242 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:50.372040 kubelet[2242]: I0212 19:43:50.372003 2242 apiserver.go:52] "Watching apiserver" Feb 12 19:43:50.380011 kubelet[2242]: I0212 19:43:50.379987 2242 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:50.417049 kubelet[2242]: I0212 19:43:50.416943 2242 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:43:53.495983 systemd[1]: Reloading. Feb 12 19:43:53.591859 /usr/lib/systemd/system-generators/torcx-generator[2579]: time="2024-02-12T19:43:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:53.591895 /usr/lib/systemd/system-generators/torcx-generator[2579]: time="2024-02-12T19:43:53Z" level=info msg="torcx already run" Feb 12 19:43:53.684877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:53.684897 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 12 19:43:53.701116 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:53.804046 systemd[1]: Stopping kubelet.service... Feb 12 19:43:53.804737 kubelet[2242]: I0212 19:43:53.804318 2242 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:53.822051 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:43:53.822484 systemd[1]: Stopped kubelet.service. Feb 12 19:43:53.842945 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 12 19:43:53.843033 kernel: audit: type=1131 audit(1707767033.821:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:53.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:53.824300 systemd[1]: Started kubelet.service. Feb 12 19:43:53.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:53.866416 kernel: audit: type=1130 audit(1707767033.823:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:53.919141 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:43:53.919575 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:53.919721 kubelet[2648]: I0212 19:43:53.919693 2648 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:53.921770 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:53.921899 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:53.926771 kubelet[2648]: I0212 19:43:53.926743 2648 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:43:53.926771 kubelet[2648]: I0212 19:43:53.926762 2648 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:53.927014 kubelet[2648]: I0212 19:43:53.926996 2648 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:43:53.928211 kubelet[2648]: I0212 19:43:53.928178 2648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:43:53.930050 kubelet[2648]: I0212 19:43:53.930006 2648 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:53.932373 kubelet[2648]: I0212 19:43:53.932324 2648 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:43:53.932754 kubelet[2648]: I0212 19:43:53.932733 2648 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:53.932823 kubelet[2648]: I0212 19:43:53.932808 2648 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:43:53.932928 kubelet[2648]: I0212 19:43:53.932831 2648 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:43:53.932928 kubelet[2648]: I0212 19:43:53.932844 2648 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:43:53.932928 kubelet[2648]: I0212 19:43:53.932883 2648 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:43:53.935961 kubelet[2648]: I0212 19:43:53.935949 2648 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:43:53.936065 kubelet[2648]: I0212 19:43:53.936056 2648 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:53.936172 kubelet[2648]: I0212 19:43:53.936163 2648 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:43:53.936247 kubelet[2648]: I0212 19:43:53.936238 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:53.942353 kubelet[2648]: I0212 19:43:53.942330 2648 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:53.942843 kubelet[2648]: I0212 19:43:53.942829 2648 server.go:1186] "Started kubelet" Feb 12 19:43:53.944860 kubelet[2648]: I0212 19:43:53.944843 2648 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:43:53.944988 kubelet[2648]: I0212 19:43:53.944966 2648 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:43:53.945080 kubelet[2648]: I0212 19:43:53.945072 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:53.943000 audit[2648]: AVC avc: denied { mac_admin } for pid=2648 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:53.948092 kubelet[2648]: I0212 19:43:53.948079 2648 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:53.948677 kubelet[2648]: I0212 19:43:53.948661 2648 server.go:451] "Adding debug handlers to kubelet server" Feb 12 
19:43:53.957440 kubelet[2648]: I0212 19:43:53.957421 2648 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:43:53.963221 kubelet[2648]: I0212 19:43:53.963191 2648 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:53.963344 kernel: audit: type=1400 audit(1707767033.943:232): avc: denied { mac_admin } for pid=2648 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:53.963442 kubelet[2648]: E0212 19:43:53.958243 2648 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:53.963527 kubelet[2648]: E0212 19:43:53.963518 2648 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:53.943000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:53.977406 kernel: audit: type=1401 audit(1707767033.943:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:53.943000 audit[2648]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000eda420 a1=c000e71560 a2=c000eda3f0 a3=25 items=0 ppid=1 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:54.000373 kernel: audit: type=1300 audit(1707767033.943:232): arch=c000003e syscall=188 success=no exit=-22 a0=c000eda420 a1=c000e71560 a2=c000eda3f0 a3=25 items=0 ppid=1 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:53.943000 audit: 
PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:54.019440 kubelet[2648]: I0212 19:43:54.019425 2648 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:43:54.022392 kernel: audit: type=1327 audit(1707767033.943:232): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:53.943000 audit[2648]: AVC avc: denied { mac_admin } for pid=2648 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:54.039356 kernel: audit: type=1400 audit(1707767033.943:233): avc: denied { mac_admin } for pid=2648 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:53.943000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:54.049508 kernel: audit: type=1401 audit(1707767033.943:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:54.049573 kernel: audit: type=1300 audit(1707767033.943:233): arch=c000003e syscall=188 success=no exit=-22 a0=c000e86b40 a1=c000e71578 a2=c000eda4b0 a3=25 items=0 ppid=1 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:53.943000 audit[2648]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e86b40 a1=c000e71578 a2=c000eda4b0 a3=25 
items=0 ppid=1 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:54.061121 kubelet[2648]: I0212 19:43:54.059839 2648 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:43:54.061233 kubelet[2648]: I0212 19:43:54.061222 2648 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:43:54.061397 kubelet[2648]: I0212 19:43:54.061388 2648 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:43:54.061511 kubelet[2648]: E0212 19:43:54.061500 2648 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:43:53.943000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:54.085679 kubelet[2648]: I0212 19:43:54.085654 2648 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.093383 kernel: audit: type=1327 audit(1707767033.943:233): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:54.105858 kubelet[2648]: I0212 19:43:54.105821 2648 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.106095 kubelet[2648]: I0212 19:43:54.106083 2648 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.131464 kubelet[2648]: I0212 19:43:54.131451 
2648 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:54.131547 kubelet[2648]: I0212 19:43:54.131541 2648 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:54.131626 kubelet[2648]: I0212 19:43:54.131620 2648 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:54.131794 kubelet[2648]: I0212 19:43:54.131786 2648 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:43:54.131862 kubelet[2648]: I0212 19:43:54.131856 2648 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:43:54.131913 kubelet[2648]: I0212 19:43:54.131907 2648 policy_none.go:49] "None policy: Start" Feb 12 19:43:54.132594 kubelet[2648]: I0212 19:43:54.132581 2648 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:54.132677 kubelet[2648]: I0212 19:43:54.132602 2648 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:54.132757 kubelet[2648]: I0212 19:43:54.132739 2648 state_mem.go:75] "Updated machine memory state" Feb 12 19:43:54.133926 kubelet[2648]: I0212 19:43:54.133905 2648 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:54.133000 audit[2648]: AVC avc: denied { mac_admin } for pid=2648 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:43:54.133000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:43:54.133000 audit[2648]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000eda630 a1=c000e71758 a2=c000eda600 a3=25 items=0 ppid=1 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:54.133000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:43:54.135082 kubelet[2648]: I0212 19:43:54.134963 2648 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:43:54.135786 kubelet[2648]: I0212 19:43:54.135360 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:54.162592 kubelet[2648]: I0212 19:43:54.162547 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.162980 kubelet[2648]: I0212 19:43:54.162941 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.163422 kubelet[2648]: I0212 19:43:54.163396 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.171562 kubelet[2648]: E0212 19:43:54.171151 2648 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177447 kubelet[2648]: I0212 19:43:54.177429 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177541 kubelet[2648]: I0212 19:43:54.177468 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: 
\"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177541 kubelet[2648]: I0212 19:43:54.177500 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177541 kubelet[2648]: I0212 19:43:54.177531 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a0ed12dbcdc06bab3e71b2aaa1c2403-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-c8dbf10a06\" (UID: \"8a0ed12dbcdc06bab3e71b2aaa1c2403\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177669 kubelet[2648]: I0212 19:43:54.177569 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177669 kubelet[2648]: I0212 19:43:54.177608 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4755c13b85c02de6b9d45464525c2535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" (UID: \"4755c13b85c02de6b9d45464525c2535\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177669 kubelet[2648]: I0212 19:43:54.177639 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177845 kubelet[2648]: I0212 19:43:54.177671 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.177845 kubelet[2648]: I0212 19:43:54.177704 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f092a17f47a2018e441962689aaa8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" (UID: \"49f092a17f47a2018e441962689aaa8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:54.942095 kubelet[2648]: I0212 19:43:54.942047 2648 apiserver.go:52] "Watching apiserver" Feb 12 19:43:54.964257 kubelet[2648]: I0212 19:43:54.964223 2648 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:54.983501 kubelet[2648]: I0212 19:43:54.983469 2648 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:43:55.143809 kubelet[2648]: E0212 19:43:55.143778 2648 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-c8dbf10a06\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:55.544782 kubelet[2648]: E0212 19:43:55.544749 2648 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-c8dbf10a06\" already exists" 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:55.743538 kubelet[2648]: E0212 19:43:55.743504 2648 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-c8dbf10a06\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" Feb 12 19:43:56.357266 kubelet[2648]: I0212 19:43:56.357211 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-c8dbf10a06" podStartSLOduration=2.357152 pod.CreationTimestamp="2024-02-12 19:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:55.953993946 +0000 UTC m=+2.124821782" watchObservedRunningTime="2024-02-12 19:43:56.357152 +0000 UTC m=+2.527979936" Feb 12 19:43:56.758011 kubelet[2648]: I0212 19:43:56.757978 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c8dbf10a06" podStartSLOduration=2.757899098 pod.CreationTimestamp="2024-02-12 19:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:56.36177057 +0000 UTC m=+2.532598406" watchObservedRunningTime="2024-02-12 19:43:56.757899098 +0000 UTC m=+2.928727034" Feb 12 19:43:59.497996 sudo[1823]: pam_unix(sudo:session): session closed for user root Feb 12 19:43:59.496000 audit[1823]: USER_END pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:59.503043 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 12 19:43:59.503119 kernel: audit: type=1106 audit(1707767039.496:235): pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:43:59.496000 audit[1823]: CRED_DISP pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:43:59.534176 kernel: audit: type=1104 audit(1707767039.496:236): pid=1823 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:43:59.597964 sshd[1819]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:59.597000 audit[1819]: USER_END pid=1819 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:43:59.600797 systemd[1]: sshd@6-10.200.8.35:22-10.200.12.6:43500.service: Deactivated successfully. Feb 12 19:43:59.608631 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:43:59.615561 systemd-logind[1403]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:43:59.616816 systemd-logind[1403]: Removed session 9. 
Feb 12 19:43:59.597000 audit[1819]: CRED_DISP pid=1819 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:43:59.621367 kernel: audit: type=1106 audit(1707767039.597:237): pid=1819 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:43:59.621412 kernel: audit: type=1104 audit(1707767039.597:238): pid=1819 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:43:59.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.35:22-10.200.12.6:43500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:59.651517 kernel: audit: type=1131 audit(1707767039.599:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.35:22-10.200.12.6:43500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:44:02.528483 kubelet[2648]: I0212 19:44:02.528077 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-c8dbf10a06" podStartSLOduration=11.528022208 pod.CreationTimestamp="2024-02-12 19:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:56.760894644 +0000 UTC m=+2.931722580" watchObservedRunningTime="2024-02-12 19:44:02.528022208 +0000 UTC m=+8.698850044" Feb 12 19:44:05.137909 kubelet[2648]: I0212 19:44:05.137878 2648 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:44:05.139070 env[1420]: time="2024-02-12T19:44:05.138965653Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:44:05.141500 kubelet[2648]: I0212 19:44:05.141473 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:44:06.022091 kubelet[2648]: I0212 19:44:06.022050 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:06.056407 kubelet[2648]: I0212 19:44:06.056372 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmvj\" (UniqueName: \"kubernetes.io/projected/0b8d4253-f4af-4196-8047-a0367610727c-kube-api-access-5rmvj\") pod \"kube-proxy-swgtj\" (UID: \"0b8d4253-f4af-4196-8047-a0367610727c\") " pod="kube-system/kube-proxy-swgtj" Feb 12 19:44:06.056596 kubelet[2648]: I0212 19:44:06.056436 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b8d4253-f4af-4196-8047-a0367610727c-kube-proxy\") pod \"kube-proxy-swgtj\" (UID: \"0b8d4253-f4af-4196-8047-a0367610727c\") " pod="kube-system/kube-proxy-swgtj" Feb 12 19:44:06.056596 kubelet[2648]: I0212 
19:44:06.056465 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b8d4253-f4af-4196-8047-a0367610727c-xtables-lock\") pod \"kube-proxy-swgtj\" (UID: \"0b8d4253-f4af-4196-8047-a0367610727c\") " pod="kube-system/kube-proxy-swgtj" Feb 12 19:44:06.056596 kubelet[2648]: I0212 19:44:06.056506 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b8d4253-f4af-4196-8047-a0367610727c-lib-modules\") pod \"kube-proxy-swgtj\" (UID: \"0b8d4253-f4af-4196-8047-a0367610727c\") " pod="kube-system/kube-proxy-swgtj" Feb 12 19:44:06.144247 kubelet[2648]: I0212 19:44:06.144206 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:06.157113 kubelet[2648]: I0212 19:44:06.157088 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02fbe200-4291-43bc-b3f8-a825331b5fb9-var-lib-calico\") pod \"tigera-operator-cfc98749c-wbs59\" (UID: \"02fbe200-4291-43bc-b3f8-a825331b5fb9\") " pod="tigera-operator/tigera-operator-cfc98749c-wbs59" Feb 12 19:44:06.157401 kubelet[2648]: I0212 19:44:06.157385 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qkf\" (UniqueName: \"kubernetes.io/projected/02fbe200-4291-43bc-b3f8-a825331b5fb9-kube-api-access-97qkf\") pod \"tigera-operator-cfc98749c-wbs59\" (UID: \"02fbe200-4291-43bc-b3f8-a825331b5fb9\") " pod="tigera-operator/tigera-operator-cfc98749c-wbs59" Feb 12 19:44:06.328061 env[1420]: time="2024-02-12T19:44:06.327597154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swgtj,Uid:0b8d4253-f4af-4196-8047-a0367610727c,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:06.450155 env[1420]: time="2024-02-12T19:44:06.449901921Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-wbs59,Uid:02fbe200-4291-43bc-b3f8-a825331b5fb9,Namespace:tigera-operator,Attempt:0,}" Feb 12 19:44:06.685045 env[1420]: time="2024-02-12T19:44:06.684979942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:06.685225 env[1420]: time="2024-02-12T19:44:06.685014542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:06.685225 env[1420]: time="2024-02-12T19:44:06.685049643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:06.690406 env[1420]: time="2024-02-12T19:44:06.685436647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5311ba6c59f9d3e19d3d78663a79791a13aeb8104dab9d197416f82a87fe34f0 pid=2753 runtime=io.containerd.runc.v2 Feb 12 19:44:06.729461 env[1420]: time="2024-02-12T19:44:06.729416175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swgtj,Uid:0b8d4253-f4af-4196-8047-a0367610727c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5311ba6c59f9d3e19d3d78663a79791a13aeb8104dab9d197416f82a87fe34f0\"" Feb 12 19:44:06.771077 env[1420]: time="2024-02-12T19:44:06.771032374Z" level=info msg="CreateContainer within sandbox \"5311ba6c59f9d3e19d3d78663a79791a13aeb8104dab9d197416f82a87fe34f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:44:06.924622 env[1420]: time="2024-02-12T19:44:06.924544716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:06.924622 env[1420]: time="2024-02-12T19:44:06.924583316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:06.924622 env[1420]: time="2024-02-12T19:44:06.924599717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:06.925037 env[1420]: time="2024-02-12T19:44:06.924990421Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7 pid=2793 runtime=io.containerd.runc.v2 Feb 12 19:44:06.975980 env[1420]: time="2024-02-12T19:44:06.975546028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-wbs59,Uid:02fbe200-4291-43bc-b3f8-a825331b5fb9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7\"" Feb 12 19:44:06.977247 env[1420]: time="2024-02-12T19:44:06.977215648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 12 19:44:07.219792 env[1420]: time="2024-02-12T19:44:07.219742399Z" level=info msg="CreateContainer within sandbox \"5311ba6c59f9d3e19d3d78663a79791a13aeb8104dab9d197416f82a87fe34f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d33ffa89c0fdc10056a444973a2b0288f432d65e5ee3648ffe8712224171f1f\"" Feb 12 19:44:07.222471 env[1420]: time="2024-02-12T19:44:07.220261405Z" level=info msg="StartContainer for \"7d33ffa89c0fdc10056a444973a2b0288f432d65e5ee3648ffe8712224171f1f\"" Feb 12 19:44:07.286081 env[1420]: time="2024-02-12T19:44:07.285975476Z" level=info msg="StartContainer for \"7d33ffa89c0fdc10056a444973a2b0288f432d65e5ee3648ffe8712224171f1f\" returns successfully" Feb 12 19:44:07.329000 audit[2882]: NETFILTER_CFG table=mangle:63 family=10 entries=1 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.341539 kernel: audit: type=1325 audit(1707767047.329:240): table=mangle:63 family=10 entries=1 
op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.329000 audit[2882]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc949e4970 a2=0 a3=7ffc949e495c items=0 ppid=2844 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.364421 kernel: audit: type=1300 audit(1707767047.329:240): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc949e4970 a2=0 a3=7ffc949e495c items=0 ppid=2844 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:44:07.329000 audit[2881]: NETFILTER_CFG table=mangle:64 family=2 entries=1 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.386989 kernel: audit: type=1327 audit(1707767047.329:240): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:44:07.387063 kernel: audit: type=1325 audit(1707767047.329:241): table=mangle:64 family=2 entries=1 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.387094 kernel: audit: type=1300 audit(1707767047.329:241): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff02961540 a2=0 a3=7fff0296152c items=0 ppid=2844 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.329000 audit[2881]: SYSCALL arch=c000003e syscall=46 
success=yes exit=104 a0=3 a1=7fff02961540 a2=0 a3=7fff0296152c items=0 ppid=2844 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:44:07.417085 kernel: audit: type=1327 audit(1707767047.329:241): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:44:07.417162 kernel: audit: type=1325 audit(1707767047.329:242): table=nat:65 family=10 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.329000 audit[2883]: NETFILTER_CFG table=nat:65 family=10 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.329000 audit[2883]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8e6c4450 a2=0 a3=7ffd8e6c443c items=0 ppid=2844 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.449090 kernel: audit: type=1300 audit(1707767047.329:242): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8e6c4450 a2=0 a3=7ffd8e6c443c items=0 ppid=2844 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:44:07.334000 audit[2884]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_chain 
pid=2884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.471223 kernel: audit: type=1327 audit(1707767047.329:242): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:44:07.471305 kernel: audit: type=1325 audit(1707767047.334:243): table=nat:66 family=2 entries=1 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.334000 audit[2884]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd8ef5fd0 a2=0 a3=7fffd8ef5fbc items=0 ppid=2844 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:44:07.334000 audit[2885]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.334000 audit[2885]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd23f1bee0 a2=0 a3=7ffd23f1becc items=0 ppid=2844 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:44:07.334000 audit[2886]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.334000 audit[2886]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0e7ae3c0 a2=0 a3=7ffd0e7ae3ac items=0 ppid=2844 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:44:07.429000 audit[2887]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.429000 audit[2887]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc196502e0 a2=0 a3=7ffc196502cc items=0 ppid=2844 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:44:07.432000 audit[2889]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=2889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.432000 audit[2889]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe1d2d6da0 a2=0 a3=7ffe1d2d6d8c items=0 ppid=2844 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 19:44:07.436000 audit[2892]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 
19:44:07.436000 audit[2892]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1ba68e50 a2=0 a3=7ffd1ba68e3c items=0 ppid=2844 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 19:44:07.437000 audit[2893]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.437000 audit[2893]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee062cbf0 a2=0 a3=7ffee062cbdc items=0 ppid=2844 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:44:07.440000 audit[2895]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.440000 audit[2895]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc6423750 a2=0 a3=7fffc642373c items=0 ppid=2844 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.440000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:44:07.441000 audit[2896]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_chain pid=2896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.441000 audit[2896]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd87d0d40 a2=0 a3=7ffdd87d0d2c items=0 ppid=2844 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:44:07.444000 audit[2898]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.444000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd10bf400 a2=0 a3=7ffcd10bf3ec items=0 ppid=2844 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:44:07.449000 audit[2901]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.449000 audit[2901]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7fff9cb165a0 a2=0 a3=7fff9cb1658c items=0 ppid=2844 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 19:44:07.450000 audit[2902]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.450000 audit[2902]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c703430 a2=0 a3=7ffc0c70341c items=0 ppid=2844 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:44:07.453000 audit[2904]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.453000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3d1cc2c0 a2=0 a3=7ffe3d1cc2ac items=0 ppid=2844 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.453000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:44:07.455000 audit[2905]: NETFILTER_CFG table=filter:79 family=2 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.455000 audit[2905]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc12d1fab0 a2=0 a3=7ffc12d1fa9c items=0 ppid=2844 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:44:07.458000 audit[2907]: NETFILTER_CFG table=filter:80 family=2 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.458000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdcddcbb0 a2=0 a3=7fffdcddcb9c items=0 ppid=2844 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:44:07.463000 audit[2910]: NETFILTER_CFG table=filter:81 family=2 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.463000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7fffd7f65ec0 a2=0 a3=7fffd7f65eac items=0 ppid=2844 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:44:07.467000 audit[2913]: NETFILTER_CFG table=filter:82 family=2 entries=1 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.467000 audit[2913]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc84a58af0 a2=0 a3=7ffc84a58adc items=0 ppid=2844 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:44:07.471000 audit[2914]: NETFILTER_CFG table=nat:83 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.471000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6a061280 a2=0 a3=7fff6a06126c items=0 ppid=2844 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.471000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:44:07.474000 audit[2916]: NETFILTER_CFG table=nat:84 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.474000 audit[2916]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff5fe47520 a2=0 a3=7fff5fe4750c items=0 ppid=2844 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:44:07.478000 audit[2919]: NETFILTER_CFG table=nat:85 family=2 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:44:07.478000 audit[2919]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6d5821e0 a2=0 a3=7ffd6d5821cc items=0 ppid=2844 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:44:07.589000 audit[2923]: NETFILTER_CFG table=filter:86 family=2 entries=6 op=nft_register_rule pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:07.589000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffeb654fb10 a2=0 a3=7ffeb654fafc items=0 ppid=2844 pid=2923 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:07.632000 audit[2923]: NETFILTER_CFG table=nat:87 family=2 entries=17 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:07.632000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffeb654fb10 a2=0 a3=7ffeb654fafc items=0 ppid=2844 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:07.637000 audit[2927]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.637000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff2c79e70 a2=0 a3=7ffff2c79e5c items=0 ppid=2844 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:44:07.640000 audit[2929]: NETFILTER_CFG table=filter:89 family=10 entries=2 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.640000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 
a0=3 a1=7ffe30bd1b10 a2=0 a3=7ffe30bd1afc items=0 ppid=2844 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 19:44:07.643000 audit[2932]: NETFILTER_CFG table=filter:90 family=10 entries=2 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.643000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeafc7eae0 a2=0 a3=7ffeafc7eacc items=0 ppid=2844 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 19:44:07.644000 audit[2933]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.644000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8c4b9570 a2=0 a3=7ffd8c4b955c items=0 ppid=2844 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.644000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:44:07.646000 audit[2935]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.646000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbe8f6fe0 a2=0 a3=7fffbe8f6fcc items=0 ppid=2844 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:44:07.647000 audit[2936]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.647000 audit[2936]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3e1ef250 a2=0 a3=7fff3e1ef23c items=0 ppid=2844 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:44:07.650000 audit[2938]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.650000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff80d47ae0 a2=0 a3=7fff80d47acc items=0 ppid=2844 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 19:44:07.653000 audit[2941]: NETFILTER_CFG table=filter:95 family=10 entries=2 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.653000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc7c51f5c0 a2=0 a3=7ffc7c51f5ac items=0 ppid=2844 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:44:07.654000 audit[2942]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.654000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2cf76960 a2=0 a3=7ffe2cf7694c items=0 ppid=2844 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:44:07.657000 audit[2944]: NETFILTER_CFG 
table=filter:97 family=10 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.657000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb582b030 a2=0 a3=7fffb582b01c items=0 ppid=2844 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:44:07.658000 audit[2945]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.658000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc79b0db80 a2=0 a3=7ffc79b0db6c items=0 ppid=2844 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:44:07.660000 audit[2947]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.660000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde8b21740 a2=0 a3=7ffde8b2172c items=0 ppid=2844 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.660000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:44:07.665000 audit[2950]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.665000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdca5b8d60 a2=0 a3=7ffdca5b8d4c items=0 ppid=2844 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:44:07.668000 audit[2953]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.668000 audit[2953]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb0b1b1b0 a2=0 a3=7ffeb0b1b19c items=0 ppid=2844 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.668000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 19:44:07.669000 audit[2954]: NETFILTER_CFG table=nat:102 family=10 
entries=1 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.669000 audit[2954]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1dac1730 a2=0 a3=7ffd1dac171c items=0 ppid=2844 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:44:07.671000 audit[2956]: NETFILTER_CFG table=nat:103 family=10 entries=2 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.671000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdc00ceed0 a2=0 a3=7ffdc00ceebc items=0 ppid=2844 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:44:07.676000 audit[2959]: NETFILTER_CFG table=nat:104 family=10 entries=2 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:44:07.676000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe70cc4da0 a2=0 a3=7ffe70cc4d8c items=0 ppid=2844 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.676000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:44:07.681000 audit[2963]: NETFILTER_CFG table=filter:105 family=10 entries=3 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:44:07.681000 audit[2963]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffe947f260 a2=0 a3=7fffe947f24c items=0 ppid=2844 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.681000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:07.682000 audit[2963]: NETFILTER_CFG table=nat:106 family=10 entries=10 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:44:07.682000 audit[2963]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fffe947f260 a2=0 a3=7fffe947f24c items=0 ppid=2844 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:07.682000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:08.122896 kubelet[2648]: I0212 19:44:08.121440 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-swgtj" podStartSLOduration=2.121387143 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-12 19:44:08.12112244 +0000 UTC m=+14.291950376" watchObservedRunningTime="2024-02-12 19:44:08.121387143 +0000 UTC m=+14.292214979" Feb 12 19:44:13.877246 env[1420]: time="2024-02-12T19:44:13.877202862Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:13.927722 env[1420]: time="2024-02-12T19:44:13.927673080Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:13.968679 env[1420]: time="2024-02-12T19:44:13.968593401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:14.019829 env[1420]: time="2024-02-12T19:44:14.019775223Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:14.020319 env[1420]: time="2024-02-12T19:44:14.020273728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 12 19:44:14.027050 env[1420]: time="2024-02-12T19:44:14.027010896Z" level=info msg="CreateContainer within sandbox \"a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 12 19:44:14.323961 env[1420]: time="2024-02-12T19:44:14.323899584Z" level=info msg="CreateContainer within sandbox \"a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e\"" Feb 12 19:44:14.326299 env[1420]: time="2024-02-12T19:44:14.324589391Z" level=info msg="StartContainer for \"d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e\"" Feb 12 19:44:14.395369 env[1420]: time="2024-02-12T19:44:14.388911439Z" level=info msg="StartContainer for \"d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e\" returns successfully" Feb 12 19:44:15.133095 kubelet[2648]: I0212 19:44:15.133057 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-wbs59" podStartSLOduration=-9.22337202772176e+09 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="2024-02-12 19:44:06.976776943 +0000 UTC m=+13.147604879" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:15.132594796 +0000 UTC m=+21.303422732" watchObservedRunningTime="2024-02-12 19:44:15.1330155 +0000 UTC m=+21.303843336" Feb 12 19:44:15.177022 systemd[1]: run-containerd-runc-k8s.io-d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e-runc.PrOdjU.mount: Deactivated successfully. 
Feb 12 19:44:16.509000 audit[3025]: NETFILTER_CFG table=filter:107 family=2 entries=13 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.514680 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 19:44:16.514752 kernel: audit: type=1325 audit(1707767056.509:284): table=filter:107 family=2 entries=13 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.526355 kernel: audit: type=1300 audit(1707767056.509:284): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffda78c53d0 a2=0 a3=7ffda78c53bc items=0 ppid=2844 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.509000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffda78c53d0 a2=0 a3=7ffda78c53bc items=0 ppid=2844 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.554979 kernel: audit: type=1327 audit(1707767056.509:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.509000 audit[3025]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.566708 kernel: audit: type=1325 audit(1707767056.509:285): table=nat:108 family=2 entries=20 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.566792 kernel: audit: type=1300 audit(1707767056.509:285): 
arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffda78c53d0 a2=0 a3=7ffda78c53bc items=0 ppid=2844 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.509000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffda78c53d0 a2=0 a3=7ffda78c53bc items=0 ppid=2844 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.599177 kernel: audit: type=1327 audit(1707767056.509:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.651403 kubelet[2648]: I0212 19:44:16.651377 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:16.704000 audit[3051]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.717367 kernel: audit: type=1325 audit(1707767056.704:286): table=filter:109 family=2 entries=14 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.704000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff39d48a40 a2=0 a3=7fff39d48a2c items=0 ppid=2844 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.704000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.749727 kubelet[2648]: I0212 19:44:16.749688 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:16.750868 kernel: audit: type=1300 audit(1707767056.704:286): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff39d48a40 a2=0 a3=7fff39d48a2c items=0 ppid=2844 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.750947 kernel: audit: type=1327 audit(1707767056.704:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.705000 audit[3051]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.767362 kernel: audit: type=1325 audit(1707767056.705:287): table=nat:110 family=2 entries=20 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:16.705000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff39d48a40 a2=0 a3=7fff39d48a2c items=0 ppid=2844 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:16.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:16.841022 kubelet[2648]: I0212 19:44:16.840982 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-lib-modules\") pod \"calico-node-slrct\" (UID: 
\"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841022 kubelet[2648]: I0212 19:44:16.841033 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-flexvol-driver-host\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841272 kubelet[2648]: I0212 19:44:16.841061 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-typha-certs\") pod \"calico-typha-75bd9dbb95-tklqs\" (UID: \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " pod="calico-system/calico-typha-75bd9dbb95-tklqs" Feb 12 19:44:16.841272 kubelet[2648]: I0212 19:44:16.841086 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-bin-dir\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841272 kubelet[2648]: I0212 19:44:16.841112 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-tigera-ca-bundle\") pod \"calico-typha-75bd9dbb95-tklqs\" (UID: \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " pod="calico-system/calico-typha-75bd9dbb95-tklqs" Feb 12 19:44:16.841272 kubelet[2648]: I0212 19:44:16.841136 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fe74fe8b-83c6-4009-a71c-e6778b511f42-node-certs\") pod \"calico-node-slrct\" (UID: 
\"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841272 kubelet[2648]: I0212 19:44:16.841162 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-xtables-lock\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841492 kubelet[2648]: I0212 19:44:16.841186 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-run-calico\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841492 kubelet[2648]: I0212 19:44:16.841219 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-policysync\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841492 kubelet[2648]: I0212 19:44:16.841254 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94mz\" (UniqueName: \"kubernetes.io/projected/fe74fe8b-83c6-4009-a71c-e6778b511f42-kube-api-access-n94mz\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841492 kubelet[2648]: I0212 19:44:16.841309 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vgg\" (UniqueName: \"kubernetes.io/projected/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-kube-api-access-c2vgg\") pod \"calico-typha-75bd9dbb95-tklqs\" (UID: 
\"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " pod="calico-system/calico-typha-75bd9dbb95-tklqs" Feb 12 19:44:16.841492 kubelet[2648]: I0212 19:44:16.841360 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe74fe8b-83c6-4009-a71c-e6778b511f42-tigera-ca-bundle\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841686 kubelet[2648]: I0212 19:44:16.841389 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-lib-calico\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841686 kubelet[2648]: I0212 19:44:16.841418 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-net-dir\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.841686 kubelet[2648]: I0212 19:44:16.841448 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-log-dir\") pod \"calico-node-slrct\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") " pod="calico-system/calico-node-slrct" Feb 12 19:44:16.842979 kubelet[2648]: I0212 19:44:16.842955 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:16.843281 kubelet[2648]: E0212 19:44:16.843261 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:16.942513 kubelet[2648]: I0212 19:44:16.942478 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7deb942c-192b-42b5-8511-5e8ab4d0a3b5-registration-dir\") pod \"csi-node-driver-8z7w9\" (UID: \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\") " pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:16.942690 kubelet[2648]: I0212 19:44:16.942577 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7deb942c-192b-42b5-8511-5e8ab4d0a3b5-varrun\") pod \"csi-node-driver-8z7w9\" (UID: \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\") " pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:16.942690 kubelet[2648]: I0212 19:44:16.942614 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7deb942c-192b-42b5-8511-5e8ab4d0a3b5-kubelet-dir\") pod \"csi-node-driver-8z7w9\" (UID: \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\") " pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:16.942690 kubelet[2648]: I0212 19:44:16.942639 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7deb942c-192b-42b5-8511-5e8ab4d0a3b5-socket-dir\") pod \"csi-node-driver-8z7w9\" (UID: \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\") " pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:16.942690 kubelet[2648]: I0212 19:44:16.942671 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7b6\" (UniqueName: \"kubernetes.io/projected/7deb942c-192b-42b5-8511-5e8ab4d0a3b5-kube-api-access-cj7b6\") pod \"csi-node-driver-8z7w9\" 
(UID: \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\") " pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:16.943855 kubelet[2648]: E0212 19:44:16.943827 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.943855 kubelet[2648]: W0212 19:44:16.943850 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944039 kubelet[2648]: E0212 19:44:16.943884 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.944088 kubelet[2648]: E0212 19:44:16.944070 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.944088 kubelet[2648]: W0212 19:44:16.944080 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944186 kubelet[2648]: E0212 19:44:16.944096 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.944265 kubelet[2648]: E0212 19:44:16.944250 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.944315 kubelet[2648]: W0212 19:44:16.944266 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944315 kubelet[2648]: E0212 19:44:16.944280 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.944540 kubelet[2648]: E0212 19:44:16.944522 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.944540 kubelet[2648]: W0212 19:44:16.944539 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944659 kubelet[2648]: E0212 19:44:16.944554 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.944723 kubelet[2648]: E0212 19:44:16.944710 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.944767 kubelet[2648]: W0212 19:44:16.944724 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944767 kubelet[2648]: E0212 19:44:16.944739 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.944919 kubelet[2648]: E0212 19:44:16.944904 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.944975 kubelet[2648]: W0212 19:44:16.944918 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.944975 kubelet[2648]: E0212 19:44:16.944932 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.945097 kubelet[2648]: E0212 19:44:16.945082 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.945148 kubelet[2648]: W0212 19:44:16.945098 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.945148 kubelet[2648]: E0212 19:44:16.945112 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.945313 kubelet[2648]: E0212 19:44:16.945297 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.945313 kubelet[2648]: W0212 19:44:16.945312 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.945432 kubelet[2648]: E0212 19:44:16.945326 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.945515 kubelet[2648]: E0212 19:44:16.945497 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.945575 kubelet[2648]: W0212 19:44:16.945516 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.945575 kubelet[2648]: E0212 19:44:16.945533 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.945704 kubelet[2648]: E0212 19:44:16.945690 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.945749 kubelet[2648]: W0212 19:44:16.945705 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.945749 kubelet[2648]: E0212 19:44:16.945720 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.945884 kubelet[2648]: E0212 19:44:16.945868 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.945884 kubelet[2648]: W0212 19:44:16.945883 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.946030 kubelet[2648]: E0212 19:44:16.945897 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.946221 kubelet[2648]: E0212 19:44:16.946203 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.946221 kubelet[2648]: W0212 19:44:16.946220 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.946346 kubelet[2648]: E0212 19:44:16.946236 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.947175 kubelet[2648]: E0212 19:44:16.947155 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.947175 kubelet[2648]: W0212 19:44:16.947172 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.947319 kubelet[2648]: E0212 19:44:16.947188 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.947392 kubelet[2648]: E0212 19:44:16.947378 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.947436 kubelet[2648]: W0212 19:44:16.947394 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.947436 kubelet[2648]: E0212 19:44:16.947409 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.947573 kubelet[2648]: E0212 19:44:16.947561 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.947627 kubelet[2648]: W0212 19:44:16.947575 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.947627 kubelet[2648]: E0212 19:44:16.947589 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.947741 kubelet[2648]: E0212 19:44:16.947730 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.947788 kubelet[2648]: W0212 19:44:16.947743 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.947788 kubelet[2648]: E0212 19:44:16.947756 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.947946 kubelet[2648]: E0212 19:44:16.947930 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.947997 kubelet[2648]: W0212 19:44:16.947946 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.947997 kubelet[2648]: E0212 19:44:16.947960 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.960650 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964357 kubelet[2648]: W0212 19:44:16.960663 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.960682 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.960852 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964357 kubelet[2648]: W0212 19:44:16.960859 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.960875 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.961013 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964357 kubelet[2648]: W0212 19:44:16.961020 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.961035 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.964357 kubelet[2648]: E0212 19:44:16.961190 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964817 kubelet[2648]: W0212 19:44:16.961198 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961258 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961369 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964817 kubelet[2648]: W0212 19:44:16.961377 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961444 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961542 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964817 kubelet[2648]: W0212 19:44:16.961552 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961620 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.964817 kubelet[2648]: E0212 19:44:16.961707 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.964817 kubelet[2648]: W0212 19:44:16.961714 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.961773 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.961873 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.965221 kubelet[2648]: W0212 19:44:16.961881 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.961953 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.962058 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.965221 kubelet[2648]: W0212 19:44:16.962067 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.962146 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.962251 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.965221 kubelet[2648]: W0212 19:44:16.962260 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.965221 kubelet[2648]: E0212 19:44:16.962328 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962446 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.966665 kubelet[2648]: W0212 19:44:16.962453 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962534 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962623 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.966665 kubelet[2648]: W0212 19:44:16.962631 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962689 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962807 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.966665 kubelet[2648]: W0212 19:44:16.962815 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962874 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.966665 kubelet[2648]: E0212 19:44:16.962961 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967055 kubelet[2648]: W0212 19:44:16.962967 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963024 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963109 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967055 kubelet[2648]: W0212 19:44:16.963116 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963174 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963261 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967055 kubelet[2648]: W0212 19:44:16.963269 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963326 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.967055 kubelet[2648]: E0212 19:44:16.963450 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967055 kubelet[2648]: W0212 19:44:16.963459 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963521 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963607 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967476 kubelet[2648]: W0212 19:44:16.963613 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963671 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963755 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967476 kubelet[2648]: W0212 19:44:16.963761 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963820 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.963961 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967476 kubelet[2648]: W0212 19:44:16.963968 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967476 kubelet[2648]: E0212 19:44:16.964029 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964196 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967873 kubelet[2648]: W0212 19:44:16.964203 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964263 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964379 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967873 kubelet[2648]: W0212 19:44:16.964388 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964458 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964601 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.967873 kubelet[2648]: W0212 19:44:16.964610 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964679 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.967873 kubelet[2648]: E0212 19:44:16.964785 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.968272 kubelet[2648]: W0212 19:44:16.964793 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.964860 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.964993 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.968272 kubelet[2648]: W0212 19:44:16.965001 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.965071 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.965193 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.968272 kubelet[2648]: W0212 19:44:16.965202 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.965269 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.968272 kubelet[2648]: E0212 19:44:16.965517 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.968272 kubelet[2648]: W0212 19:44:16.965526 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.965617 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.965873 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.971144 kubelet[2648]: W0212 19:44:16.965902 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.965993 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.966203 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.971144 kubelet[2648]: W0212 19:44:16.966213 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.966347 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.966465 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.971144 kubelet[2648]: W0212 19:44:16.966473 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.971144 kubelet[2648]: E0212 19:44:16.966566 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.966732 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.977665 kubelet[2648]: W0212 19:44:16.966741 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.966810 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.966912 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.977665 kubelet[2648]: W0212 19:44:16.966921 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.967009 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.967768 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.977665 kubelet[2648]: W0212 19:44:16.967779 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.967855 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.977665 kubelet[2648]: E0212 19:44:16.967965 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978044 kubelet[2648]: W0212 19:44:16.967974 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968043 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968186 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978044 kubelet[2648]: W0212 19:44:16.968194 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968211 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968416 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978044 kubelet[2648]: W0212 19:44:16.968426 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968440 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.978044 kubelet[2648]: E0212 19:44:16.968582 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978044 kubelet[2648]: W0212 19:44:16.968589 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978437 kubelet[2648]: E0212 19:44:16.968601 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:16.978437 kubelet[2648]: E0212 19:44:16.968754 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978437 kubelet[2648]: W0212 19:44:16.968761 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978437 kubelet[2648]: E0212 19:44:16.968773 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:16.978437 kubelet[2648]: E0212 19:44:16.968894 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:16.978437 kubelet[2648]: W0212 19:44:16.968901 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:16.978437 kubelet[2648]: E0212 19:44:16.968912 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043423 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047361 kubelet[2648]: W0212 19:44:17.043442 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043460 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043709 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047361 kubelet[2648]: W0212 19:44:17.043721 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043740 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043944 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047361 kubelet[2648]: W0212 19:44:17.043952 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.043968 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.047361 kubelet[2648]: E0212 19:44:17.044144 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047872 kubelet[2648]: W0212 19:44:17.044152 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044167 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044342 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047872 kubelet[2648]: W0212 19:44:17.044350 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044366 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044562 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047872 kubelet[2648]: W0212 19:44:17.044570 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044585 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.047872 kubelet[2648]: E0212 19:44:17.044774 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.047872 kubelet[2648]: W0212 19:44:17.044783 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.044848 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.044993 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048288 kubelet[2648]: W0212 19:44:17.045001 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.045065 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.045164 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048288 kubelet[2648]: W0212 19:44:17.045171 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.045239 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.045349 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048288 kubelet[2648]: W0212 19:44:17.045357 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048288 kubelet[2648]: E0212 19:44:17.045375 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.045565 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048731 kubelet[2648]: W0212 19:44:17.045575 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.045592 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.045797 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048731 kubelet[2648]: W0212 19:44:17.045807 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.045879 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.046033 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.048731 kubelet[2648]: W0212 19:44:17.046042 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.046111 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.048731 kubelet[2648]: E0212 19:44:17.046232 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049128 kubelet[2648]: W0212 19:44:17.046240 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046308 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046435 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049128 kubelet[2648]: W0212 19:44:17.046443 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046516 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046620 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049128 kubelet[2648]: W0212 19:44:17.046628 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046719 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.049128 kubelet[2648]: E0212 19:44:17.046824 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049128 kubelet[2648]: W0212 19:44:17.046831 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.046846 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047015 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049426 kubelet[2648]: W0212 19:44:17.047023 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047040 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047210 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049426 kubelet[2648]: W0212 19:44:17.047218 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047235 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047477 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049426 kubelet[2648]: W0212 19:44:17.047486 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049426 kubelet[2648]: E0212 19:44:17.047561 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.047675 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049812 kubelet[2648]: W0212 19:44:17.047683 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.047754 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.047855 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049812 kubelet[2648]: W0212 19:44:17.047863 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.047930 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.048039 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.049812 kubelet[2648]: W0212 19:44:17.048047 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.048112 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.049812 kubelet[2648]: E0212 19:44:17.048314 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.050192 kubelet[2648]: W0212 19:44:17.048326 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.048361 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.048583 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.050192 kubelet[2648]: W0212 19:44:17.048593 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.048610 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.048796 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.050192 kubelet[2648]: W0212 19:44:17.048805 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.048822 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.050192 kubelet[2648]: E0212 19:44:17.049067 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.050192 kubelet[2648]: W0212 19:44:17.049077 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.050584 kubelet[2648]: E0212 19:44:17.049090 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.073948 kubelet[2648]: E0212 19:44:17.073933 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.074150 kubelet[2648]: W0212 19:44:17.074137 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.074262 kubelet[2648]: E0212 19:44:17.074252 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.145708 kubelet[2648]: E0212 19:44:17.145680 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.145708 kubelet[2648]: W0212 19:44:17.145698 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.145960 kubelet[2648]: E0212 19:44:17.145721 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.146010 kubelet[2648]: E0212 19:44:17.145971 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.146010 kubelet[2648]: W0212 19:44:17.145982 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.146010 kubelet[2648]: E0212 19:44:17.146000 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.246782 kubelet[2648]: E0212 19:44:17.246748 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.246782 kubelet[2648]: W0212 19:44:17.246774 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.247001 kubelet[2648]: E0212 19:44:17.246797 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.247053 kubelet[2648]: E0212 19:44:17.247026 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.247053 kubelet[2648]: W0212 19:44:17.247036 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.247053 kubelet[2648]: E0212 19:44:17.247052 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.266994 env[1420]: time="2024-02-12T19:44:17.266953324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75bd9dbb95-tklqs,Uid:654389a3-6fb1-4f24-a647-2f8ebb7c6ed8,Namespace:calico-system,Attempt:0,}" Feb 12 19:44:17.282310 kubelet[2648]: E0212 19:44:17.282076 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.282310 kubelet[2648]: W0212 19:44:17.282090 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.282310 kubelet[2648]: E0212 19:44:17.282110 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.348997 kubelet[2648]: E0212 19:44:17.348897 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.348997 kubelet[2648]: W0212 19:44:17.348923 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.348997 kubelet[2648]: E0212 19:44:17.348949 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:17.355215 env[1420]: time="2024-02-12T19:44:17.355176358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slrct,Uid:fe74fe8b-83c6-4009-a71c-e6778b511f42,Namespace:calico-system,Attempt:0,}" Feb 12 19:44:17.450481 kubelet[2648]: E0212 19:44:17.450451 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.450481 kubelet[2648]: W0212 19:44:17.450475 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.450670 kubelet[2648]: E0212 19:44:17.450498 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.465028 kubelet[2648]: E0212 19:44:17.465004 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:17.465028 kubelet[2648]: W0212 19:44:17.465025 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:17.465211 kubelet[2648]: E0212 19:44:17.465045 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:17.535490 env[1420]: time="2024-02-12T19:44:17.535407163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:17.535664 env[1420]: time="2024-02-12T19:44:17.535495863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:17.535664 env[1420]: time="2024-02-12T19:44:17.535544664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:17.535785 env[1420]: time="2024-02-12T19:44:17.535751466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789 pid=3170 runtime=io.containerd.runc.v2 Feb 12 19:44:17.623249 env[1420]: time="2024-02-12T19:44:17.623129992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75bd9dbb95-tklqs,Uid:654389a3-6fb1-4f24-a647-2f8ebb7c6ed8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\"" Feb 12 19:44:17.629577 env[1420]: time="2024-02-12T19:44:17.629544853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 12 19:44:17.693381 env[1420]: time="2024-02-12T19:44:17.693297756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:17.693536 env[1420]: time="2024-02-12T19:44:17.693394657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:17.693536 env[1420]: time="2024-02-12T19:44:17.693423757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:17.693652 env[1420]: time="2024-02-12T19:44:17.693574558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f pid=3212 runtime=io.containerd.runc.v2 Feb 12 19:44:17.760067 env[1420]: time="2024-02-12T19:44:17.760021787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slrct,Uid:fe74fe8b-83c6-4009-a71c-e6778b511f42,Namespace:calico-system,Attempt:0,} returns sandbox id \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\"" Feb 12 19:44:17.812000 audit[3270]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:17.812000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc10306370 a2=0 a3=7ffc1030635c items=0 ppid=2844 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:17.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:17.813000 audit[3270]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:17.813000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc10306370 a2=0 a3=7ffc1030635c items=0 ppid=2844 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:17.813000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:18.063666 kubelet[2648]: E0212 19:44:18.063638 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:19.774938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270430561.mount: Deactivated successfully. Feb 12 19:44:20.062955 kubelet[2648]: E0212 19:44:20.062867 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:22.062948 kubelet[2648]: E0212 19:44:22.062916 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:22.099595 env[1420]: time="2024-02-12T19:44:22.099543798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.108945 env[1420]: time="2024-02-12T19:44:22.108859578Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.116209 env[1420]: time="2024-02-12T19:44:22.116120340Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.124287 env[1420]: time="2024-02-12T19:44:22.124257709Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.125206 env[1420]: time="2024-02-12T19:44:22.125170717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 12 19:44:22.129104 env[1420]: time="2024-02-12T19:44:22.129007250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 19:44:22.136946 env[1420]: time="2024-02-12T19:44:22.136053111Z" level=info msg="CreateContainer within sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 12 19:44:22.178963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047977892.mount: Deactivated successfully. 
Feb 12 19:44:22.191117 env[1420]: time="2024-02-12T19:44:22.190869580Z" level=info msg="CreateContainer within sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\"" Feb 12 19:44:22.191961 env[1420]: time="2024-02-12T19:44:22.191930489Z" level=info msg="StartContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\"" Feb 12 19:44:22.271205 env[1420]: time="2024-02-12T19:44:22.271163368Z" level=info msg="StartContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" returns successfully" Feb 12 19:44:23.158258 env[1420]: time="2024-02-12T19:44:23.157241532Z" level=info msg="StopContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" with timeout 300 (s)" Feb 12 19:44:23.158258 env[1420]: time="2024-02-12T19:44:23.157699436Z" level=info msg="Stop container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" with signal terminated" Feb 12 19:44:23.168050 kubelet[2648]: I0212 19:44:23.168021 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-75bd9dbb95-tklqs" podStartSLOduration=-9.223372029686798e+09 pod.CreationTimestamp="2024-02-12 19:44:16 +0000 UTC" firstStartedPulling="2024-02-12 19:44:17.624423504 +0000 UTC m=+23.795251340" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:23.16778782 +0000 UTC m=+29.338615756" watchObservedRunningTime="2024-02-12 19:44:23.167978522 +0000 UTC m=+29.338806358" Feb 12 19:44:23.190308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8-rootfs.mount: Deactivated successfully. 
Feb 12 19:44:24.063220 kubelet[2648]: E0212 19:44:24.063182 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:24.274032 env[1420]: time="2024-02-12T19:44:24.176391168Z" level=error msg="collecting metrics for 924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8" error="cgroups: cgroup deleted: unknown" Feb 12 19:44:24.326895 env[1420]: time="2024-02-12T19:44:24.326778008Z" level=info msg="shim disconnected" id=924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8 Feb 12 19:44:24.326895 env[1420]: time="2024-02-12T19:44:24.326826709Z" level=warning msg="cleaning up after shim disconnected" id=924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8 namespace=k8s.io Feb 12 19:44:24.326895 env[1420]: time="2024-02-12T19:44:24.326838509Z" level=info msg="cleaning up dead shim" Feb 12 19:44:24.335892 env[1420]: time="2024-02-12T19:44:24.335845683Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3334 runtime=io.containerd.runc.v2\n" Feb 12 19:44:24.340305 env[1420]: time="2024-02-12T19:44:24.340263420Z" level=info msg="StopContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" returns successfully" Feb 12 19:44:24.340898 env[1420]: time="2024-02-12T19:44:24.340868325Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\"" Feb 12 19:44:24.344907 env[1420]: time="2024-02-12T19:44:24.340931025Z" level=info msg="Container to stop \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:44:24.343765 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789-shm.mount: Deactivated successfully. Feb 12 19:44:24.373834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789-rootfs.mount: Deactivated successfully. Feb 12 19:44:24.397746 env[1420]: time="2024-02-12T19:44:24.397697893Z" level=info msg="shim disconnected" id=1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789 Feb 12 19:44:24.397746 env[1420]: time="2024-02-12T19:44:24.397745794Z" level=warning msg="cleaning up after shim disconnected" id=1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789 namespace=k8s.io Feb 12 19:44:24.398016 env[1420]: time="2024-02-12T19:44:24.397757994Z" level=info msg="cleaning up dead shim" Feb 12 19:44:24.405038 env[1420]: time="2024-02-12T19:44:24.404993453Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3369 runtime=io.containerd.runc.v2\n" Feb 12 19:44:24.405359 env[1420]: time="2024-02-12T19:44:24.405310056Z" level=info msg="TearDown network for sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" successfully" Feb 12 19:44:24.405359 env[1420]: time="2024-02-12T19:44:24.405345156Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" returns successfully" Feb 12 19:44:24.506974 kubelet[2648]: E0212 19:44:24.506943 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.506974 kubelet[2648]: W0212 19:44:24.506965 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.506974 kubelet[2648]: E0212 19:44:24.506990 2648 
plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:24.507774 kubelet[2648]: I0212 19:44:24.507036 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2vgg\" (UniqueName: \"kubernetes.io/projected/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-kube-api-access-c2vgg\") pod \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\" (UID: \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " Feb 12 19:44:24.507774 kubelet[2648]: E0212 19:44:24.507275 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.507774 kubelet[2648]: W0212 19:44:24.507289 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.507774 kubelet[2648]: E0212 19:44:24.507309 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:24.507774 kubelet[2648]: I0212 19:44:24.507365 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-tigera-ca-bundle\") pod \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\" (UID: \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " Feb 12 19:44:24.507774 kubelet[2648]: E0212 19:44:24.507581 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.507774 kubelet[2648]: W0212 19:44:24.507593 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.507774 kubelet[2648]: E0212 19:44:24.507609 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:24.508221 kubelet[2648]: I0212 19:44:24.507643 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-typha-certs\") pod \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\" (UID: \"654389a3-6fb1-4f24-a647-2f8ebb7c6ed8\") " Feb 12 19:44:24.508221 kubelet[2648]: E0212 19:44:24.507938 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.508221 kubelet[2648]: W0212 19:44:24.507949 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.508221 kubelet[2648]: E0212 19:44:24.507968 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:24.510767 kubelet[2648]: E0212 19:44:24.509057 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.510767 kubelet[2648]: W0212 19:44:24.509086 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.510767 kubelet[2648]: E0212 19:44:24.509111 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:24.514312 systemd[1]: var-lib-kubelet-pods-654389a3\x2d6fb1\x2d4f24\x2da647\x2d2f8ebb7c6ed8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Feb 12 19:44:24.515892 kubelet[2648]: I0212 19:44:24.515866 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8" (UID: "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:44:24.520074 systemd[1]: var-lib-kubelet-pods-654389a3\x2d6fb1\x2d4f24\x2da647\x2d2f8ebb7c6ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2vgg.mount: Deactivated successfully. Feb 12 19:44:24.522928 systemd[1]: var-lib-kubelet-pods-654389a3\x2d6fb1\x2d4f24\x2da647\x2d2f8ebb7c6ed8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 12 19:44:24.523689 kubelet[2648]: E0212 19:44:24.523675 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:24.523811 kubelet[2648]: W0212 19:44:24.523797 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:24.523908 kubelet[2648]: E0212 19:44:24.523898 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:24.524156 kubelet[2648]: W0212 19:44:24.524138 2648 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 12 19:44:24.524410 kubelet[2648]: I0212 19:44:24.524385 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8" (UID: "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:44:24.524635 kubelet[2648]: I0212 19:44:24.524596 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-kube-api-access-c2vgg" (OuterVolumeSpecName: "kube-api-access-c2vgg") pod "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8" (UID: "654389a3-6fb1-4f24-a647-2f8ebb7c6ed8"). InnerVolumeSpecName "kube-api-access-c2vgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:44:24.608161 kubelet[2648]: I0212 19:44:24.608072 2648 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-c2vgg\" (UniqueName: \"kubernetes.io/projected/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-kube-api-access-c2vgg\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\"" Feb 12 19:44:24.608378 kubelet[2648]: I0212 19:44:24.608362 2648 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-tigera-ca-bundle\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\"" Feb 12 19:44:24.608484 kubelet[2648]: I0212 19:44:24.608474 2648 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8-typha-certs\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\"" Feb 12 19:44:25.161301 kubelet[2648]: I0212 19:44:25.161259 2648 scope.go:115] "RemoveContainer" containerID="924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8" Feb 12 19:44:25.163856 env[1420]: time="2024-02-12T19:44:25.163793486Z" level=info msg="RemoveContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\"" Feb 12 19:44:25.183523 env[1420]: time="2024-02-12T19:44:25.183482145Z" level=info msg="RemoveContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" returns successfully" Feb 12 19:44:25.183749 kubelet[2648]: I0212 19:44:25.183726 2648 scope.go:115] "RemoveContainer" containerID="924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8" Feb 12 19:44:25.184000 env[1420]: time="2024-02-12T19:44:25.183928549Z" level=error msg="ContainerStatus for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": not 
found" Feb 12 19:44:25.184137 kubelet[2648]: E0212 19:44:25.184118 2648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": not found" containerID="924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8" Feb 12 19:44:25.184202 kubelet[2648]: I0212 19:44:25.184157 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8} err="failed to get container status \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": not found" Feb 12 19:44:25.192372 kubelet[2648]: I0212 19:44:25.192077 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:25.192372 kubelet[2648]: E0212 19:44:25.192148 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="654389a3-6fb1-4f24-a647-2f8ebb7c6ed8" containerName="calico-typha" Feb 12 19:44:25.192372 kubelet[2648]: I0212 19:44:25.192196 2648 memory_manager.go:346] "RemoveStaleState removing state" podUID="654389a3-6fb1-4f24-a647-2f8ebb7c6ed8" containerName="calico-typha" Feb 12 19:44:25.197501 kubelet[2648]: E0212 19:44:25.197485 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.197631 kubelet[2648]: W0212 19:44:25.197615 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.197732 kubelet[2648]: E0212 19:44:25.197721 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.198019 kubelet[2648]: E0212 19:44:25.198005 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.198118 kubelet[2648]: W0212 19:44:25.198105 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.198198 kubelet[2648]: E0212 19:44:25.198188 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.198460 kubelet[2648]: E0212 19:44:25.198448 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.198558 kubelet[2648]: W0212 19:44:25.198548 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.198628 kubelet[2648]: E0212 19:44:25.198621 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.198868 kubelet[2648]: E0212 19:44:25.198859 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.198949 kubelet[2648]: W0212 19:44:25.198937 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.199027 kubelet[2648]: E0212 19:44:25.199018 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.199258 kubelet[2648]: E0212 19:44:25.199245 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.199373 kubelet[2648]: W0212 19:44:25.199361 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.199453 kubelet[2648]: E0212 19:44:25.199445 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.200644 kubelet[2648]: E0212 19:44:25.200613 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.200756 kubelet[2648]: W0212 19:44:25.200741 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.200845 kubelet[2648]: E0212 19:44:25.200834 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.201159 kubelet[2648]: E0212 19:44:25.201145 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.201268 kubelet[2648]: W0212 19:44:25.201253 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.201373 kubelet[2648]: E0212 19:44:25.201361 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.206954 kubelet[2648]: E0212 19:44:25.206940 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.207063 kubelet[2648]: W0212 19:44:25.207050 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.207146 kubelet[2648]: E0212 19:44:25.207137 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.207396 kubelet[2648]: E0212 19:44:25.207383 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.207502 kubelet[2648]: W0212 19:44:25.207489 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.207586 kubelet[2648]: E0212 19:44:25.207576 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.215451 kubelet[2648]: E0212 19:44:25.215436 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.215571 kubelet[2648]: W0212 19:44:25.215557 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.215650 kubelet[2648]: E0212 19:44:25.215640 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.215753 kubelet[2648]: I0212 19:44:25.215743 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a14349b1-1cb6-41ab-916a-f1a0f9765595-tigera-ca-bundle\") pod \"calico-typha-d6b9f4645-z57bj\" (UID: \"a14349b1-1cb6-41ab-916a-f1a0f9765595\") " pod="calico-system/calico-typha-d6b9f4645-z57bj" Feb 12 19:44:25.216011 kubelet[2648]: E0212 19:44:25.215998 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.216112 kubelet[2648]: W0212 19:44:25.216098 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.216195 kubelet[2648]: E0212 19:44:25.216185 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.216287 kubelet[2648]: I0212 19:44:25.216276 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54jt\" (UniqueName: \"kubernetes.io/projected/a14349b1-1cb6-41ab-916a-f1a0f9765595-kube-api-access-f54jt\") pod \"calico-typha-d6b9f4645-z57bj\" (UID: \"a14349b1-1cb6-41ab-916a-f1a0f9765595\") " pod="calico-system/calico-typha-d6b9f4645-z57bj" Feb 12 19:44:25.216578 kubelet[2648]: E0212 19:44:25.216564 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.216680 kubelet[2648]: W0212 19:44:25.216665 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.216762 kubelet[2648]: E0212 19:44:25.216752 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.216846 kubelet[2648]: I0212 19:44:25.216837 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a14349b1-1cb6-41ab-916a-f1a0f9765595-typha-certs\") pod \"calico-typha-d6b9f4645-z57bj\" (UID: \"a14349b1-1cb6-41ab-916a-f1a0f9765595\") " pod="calico-system/calico-typha-d6b9f4645-z57bj" Feb 12 19:44:25.217121 kubelet[2648]: E0212 19:44:25.217107 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.217220 kubelet[2648]: W0212 19:44:25.217206 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.217312 kubelet[2648]: E0212 19:44:25.217300 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.217657 kubelet[2648]: E0212 19:44:25.217643 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.217771 kubelet[2648]: W0212 19:44:25.217735 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.217886 kubelet[2648]: E0212 19:44:25.217874 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.218162 kubelet[2648]: E0212 19:44:25.218149 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.218266 kubelet[2648]: W0212 19:44:25.218253 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.219960 kubelet[2648]: E0212 19:44:25.219943 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.220277 kubelet[2648]: E0212 19:44:25.220263 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.220491 kubelet[2648]: W0212 19:44:25.220475 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.220599 kubelet[2648]: E0212 19:44:25.220588 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.220937 kubelet[2648]: E0212 19:44:25.220923 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.221058 kubelet[2648]: W0212 19:44:25.221044 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.221179 kubelet[2648]: E0212 19:44:25.221169 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.221502 kubelet[2648]: E0212 19:44:25.221488 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.221617 kubelet[2648]: W0212 19:44:25.221604 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.221728 kubelet[2648]: E0212 19:44:25.221704 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.251000 audit[3448]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=3448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.261418 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 12 19:44:25.261518 kernel: audit: type=1325 audit(1707767065.251:290): table=filter:113 family=2 entries=14 op=nft_register_rule pid=3448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.251000 audit[3448]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffdd4951c90 a2=0 a3=7ffdd4951c7c items=0 ppid=2844 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:25.291073 kernel: audit: type=1300 audit(1707767065.251:290): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffdd4951c90 a2=0 a3=7ffdd4951c7c items=0 ppid=2844 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:25.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:25.303201 kernel: audit: type=1327 audit(1707767065.251:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:25.251000 audit[3448]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=3448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.316359 kernel: audit: type=1325 audit(1707767065.251:291): table=nat:114 family=2 entries=20 op=nft_register_rule pid=3448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.251000 audit[3448]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffdd4951c90 a2=0 a3=7ffdd4951c7c items=0 ppid=2844 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.317740 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323557 kubelet[2648]: W0212 19:44:25.317755 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.317773 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.317939 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323557 kubelet[2648]: W0212 19:44:25.317947 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.317959 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.318091 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323557 kubelet[2648]: W0212 19:44:25.318097 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.318108 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.323557 kubelet[2648]: E0212 19:44:25.318265 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323869 kubelet[2648]: W0212 19:44:25.318272 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318281 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318416 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323869 kubelet[2648]: W0212 19:44:25.318422 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318433 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318544 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323869 kubelet[2648]: W0212 19:44:25.318550 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318560 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.323869 kubelet[2648]: E0212 19:44:25.318694 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.323869 kubelet[2648]: W0212 19:44:25.318701 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.324161 kubelet[2648]: E0212 19:44:25.318712 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.324161 kubelet[2648]: E0212 19:44:25.322054 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.324161 kubelet[2648]: W0212 19:44:25.322067 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.324161 kubelet[2648]: E0212 19:44:25.322079 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.324515 kubelet[2648]: E0212 19:44:25.324501 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.324608 kubelet[2648]: W0212 19:44:25.324598 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.324684 kubelet[2648]: E0212 19:44:25.324676 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.324930 kubelet[2648]: E0212 19:44:25.324920 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.325008 kubelet[2648]: W0212 19:44:25.324998 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.325074 kubelet[2648]: E0212 19:44:25.325060 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.325295 kubelet[2648]: E0212 19:44:25.325285 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.325688 kubelet[2648]: W0212 19:44:25.325368 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.325795 kubelet[2648]: E0212 19:44:25.325786 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.326034 kubelet[2648]: E0212 19:44:25.326024 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.326115 kubelet[2648]: W0212 19:44:25.326105 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.326179 kubelet[2648]: E0212 19:44:25.326166 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.326388 kubelet[2648]: E0212 19:44:25.326379 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.326460 kubelet[2648]: W0212 19:44:25.326451 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.326528 kubelet[2648]: E0212 19:44:25.326515 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.326783 kubelet[2648]: E0212 19:44:25.326768 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.326857 kubelet[2648]: W0212 19:44:25.326847 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.326909 kubelet[2648]: E0212 19:44:25.326902 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.327131 kubelet[2648]: E0212 19:44:25.327123 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.327210 kubelet[2648]: W0212 19:44:25.327201 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.327262 kubelet[2648]: E0212 19:44:25.327256 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.328022 kubelet[2648]: E0212 19:44:25.328011 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.328117 kubelet[2648]: W0212 19:44:25.328107 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.328182 kubelet[2648]: E0212 19:44:25.328169 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:25.328597 kubelet[2648]: E0212 19:44:25.328587 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:25.328687 kubelet[2648]: W0212 19:44:25.328678 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:25.328746 kubelet[2648]: E0212 19:44:25.328734 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:25.337846 kernel: audit: type=1300 audit(1707767065.251:291): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffdd4951c90 a2=0 a3=7ffdd4951c7c items=0 ppid=2844 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:25.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:25.361977 kernel: audit: type=1327 audit(1707767065.251:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:44:25.362288 kernel: audit: type=1325 audit(1707767065.322:292): table=filter:115 family=2 entries=14 op=nft_register_rule pid=3474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.322000 audit[3474]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=3474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:44:25.322000 audit[3474]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff12f0aac0 a2=0 a3=7fff12f0aaac items=0 ppid=2844 pid=3474 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:25.381322 kubelet[2648]: E0212 19:44:25.381301 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:25.381489 kubelet[2648]: W0212 19:44:25.381473 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:25.381583 kubelet[2648]: E0212 19:44:25.381573 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:25.385560 kernel: audit: type=1300 audit(1707767065.322:292): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff12f0aac0 a2=0 a3=7fff12f0aaac items=0 ppid=2844 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:25.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:44:25.400099 kernel: audit: type=1327 audit(1707767065.322:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:44:25.400174 kernel: audit: type=1325 audit(1707767065.349:293): table=nat:116 family=2 entries=20 op=nft_register_rule pid=3474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:44:25.349000 audit[3474]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=3474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:44:25.406342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579207504.mount: Deactivated successfully.
Feb 12 19:44:25.349000 audit[3474]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff12f0aac0 a2=0 a3=7fff12f0aaac items=0 ppid=2844 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:25.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:44:25.497125 env[1420]: time="2024-02-12T19:44:25.497082383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6b9f4645-z57bj,Uid:a14349b1-1cb6-41ab-916a-f1a0f9765595,Namespace:calico-system,Attempt:0,}"
Feb 12 19:44:25.568354 env[1420]: time="2024-02-12T19:44:25.568280959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:25.568561 env[1420]: time="2024-02-12T19:44:25.568534762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:25.568662 env[1420]: time="2024-02-12T19:44:25.568641662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:25.568873 env[1420]: time="2024-02-12T19:44:25.568848164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1568d2b3b8c5e982a86d3218b32482d4a7a4b059597fb18a15d3b3809b3b9ae3 pid=3503 runtime=io.containerd.runc.v2
Feb 12 19:44:25.688940 env[1420]: time="2024-02-12T19:44:25.688884636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6b9f4645-z57bj,Uid:a14349b1-1cb6-41ab-916a-f1a0f9765595,Namespace:calico-system,Attempt:0,} returns sandbox id \"1568d2b3b8c5e982a86d3218b32482d4a7a4b059597fb18a15d3b3809b3b9ae3\""
Feb 12 19:44:25.701892 env[1420]: time="2024-02-12T19:44:25.701855041Z" level=info msg="CreateContainer within sandbox \"1568d2b3b8c5e982a86d3218b32482d4a7a4b059597fb18a15d3b3809b3b9ae3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 12 19:44:25.749299 env[1420]: time="2024-02-12T19:44:25.748847421Z" level=info msg="CreateContainer within sandbox \"1568d2b3b8c5e982a86d3218b32482d4a7a4b059597fb18a15d3b3809b3b9ae3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"26892d3e4b0a74ca968ba050a5ba3416e6ed0d0b6bed068951f26306f295702a\""
Feb 12 19:44:25.749492 env[1420]: time="2024-02-12T19:44:25.749461326Z" level=info msg="StartContainer for \"26892d3e4b0a74ca968ba050a5ba3416e6ed0d0b6bed068951f26306f295702a\""
Feb 12 19:44:25.862835 env[1420]: time="2024-02-12T19:44:25.862782243Z" level=info msg="StartContainer for \"26892d3e4b0a74ca968ba050a5ba3416e6ed0d0b6bed068951f26306f295702a\" returns successfully"
Feb 12 19:44:26.063923 kubelet[2648]: E0212 19:44:26.062493 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5
Feb 12 19:44:26.064646 env[1420]: time="2024-02-12T19:44:26.064603367Z" level=info msg="StopContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" with timeout 1 (s)"
Feb 12 19:44:26.064758 env[1420]: time="2024-02-12T19:44:26.064666868Z" level=error msg="StopContainer for \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": not found"
Feb 12 19:44:26.065093 kubelet[2648]: E0212 19:44:26.064965 2648 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8\": not found" containerID="924c765e38ed4fe8071fa869d8e22fde8057c959f5411f1877f8dffd1b65e1f8"
Feb 12 19:44:26.065382 env[1420]: time="2024-02-12T19:44:26.065357273Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\""
Feb 12 19:44:26.065621 env[1420]: time="2024-02-12T19:44:26.065571475Z" level=info msg="TearDown network for sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" successfully"
Feb 12 19:44:26.065739 env[1420]: time="2024-02-12T19:44:26.065718176Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" returns successfully"
Feb 12 19:44:26.068124 kubelet[2648]: I0212 19:44:26.068108 2648 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=654389a3-6fb1-4f24-a647-2f8ebb7c6ed8 path="/var/lib/kubelet/pods/654389a3-6fb1-4f24-a647-2f8ebb7c6ed8/volumes"
Feb 12 19:44:26.177131 kubelet[2648]: I0212 19:44:26.177102 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d6b9f4645-z57bj" podStartSLOduration=9.177045461 pod.CreationTimestamp="2024-02-12 19:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:26.176707558 +0000 UTC m=+32.347535394" watchObservedRunningTime="2024-02-12 19:44:26.177045461 +0000 UTC m=+32.347873397"
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215285 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217002 kubelet[2648]: W0212 19:44:26.215306 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215346 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215540 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217002 kubelet[2648]: W0212 19:44:26.215551 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215566 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215730 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217002 kubelet[2648]: W0212 19:44:26.215740 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215754 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217002 kubelet[2648]: E0212 19:44:26.215946 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217552 kubelet[2648]: W0212 19:44:26.215955 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.215968 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.216113 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217552 kubelet[2648]: W0212 19:44:26.216121 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.216135 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.216273 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217552 kubelet[2648]: W0212 19:44:26.216281 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.216295 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217552 kubelet[2648]: E0212 19:44:26.216534 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217552 kubelet[2648]: W0212 19:44:26.216544 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217945 kubelet[2648]: E0212 19:44:26.216558 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217945 kubelet[2648]: E0212 19:44:26.216700 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217945 kubelet[2648]: W0212 19:44:26.216708 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217945 kubelet[2648]: E0212 19:44:26.216723 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.217945 kubelet[2648]: E0212 19:44:26.216863 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.217945 kubelet[2648]: W0212 19:44:26.216870 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.217945 kubelet[2648]: E0212 19:44:26.216882 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.218669 kubelet[2648]: E0212 19:44:26.218319 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.218669 kubelet[2648]: W0212 19:44:26.218353 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.218669 kubelet[2648]: E0212 19:44:26.218372 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.218669 kubelet[2648]: E0212 19:44:26.218559 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.218669 kubelet[2648]: W0212 19:44:26.218567 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.218669 kubelet[2648]: E0212 19:44:26.218582 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.219113 kubelet[2648]: E0212 19:44:26.219043 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.219113 kubelet[2648]: W0212 19:44:26.219054 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.219113 kubelet[2648]: E0212 19:44:26.219069 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.225179 kubelet[2648]: E0212 19:44:26.225016 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.225179 kubelet[2648]: W0212 19:44:26.225030 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.225179 kubelet[2648]: E0212 19:44:26.225045 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.225599 kubelet[2648]: E0212 19:44:26.225441 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.225599 kubelet[2648]: W0212 19:44:26.225453 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.225599 kubelet[2648]: E0212 19:44:26.225473 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.225983 kubelet[2648]: E0212 19:44:26.225830 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.225983 kubelet[2648]: W0212 19:44:26.225843 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.225983 kubelet[2648]: E0212 19:44:26.225861 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.226303 kubelet[2648]: E0212 19:44:26.226173 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.226303 kubelet[2648]: W0212 19:44:26.226184 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.226303 kubelet[2648]: E0212 19:44:26.226203 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.226659 kubelet[2648]: E0212 19:44:26.226528 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.226659 kubelet[2648]: W0212 19:44:26.226540 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.226659 kubelet[2648]: E0212 19:44:26.226637 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.226975 kubelet[2648]: E0212 19:44:26.226848 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.226975 kubelet[2648]: W0212 19:44:26.226858 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.226975 kubelet[2648]: E0212 19:44:26.226940 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.227257 kubelet[2648]: E0212 19:44:26.227173 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.227257 kubelet[2648]: W0212 19:44:26.227185 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.227404 kubelet[2648]: E0212 19:44:26.227394 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.227595 kubelet[2648]: E0212 19:44:26.227586 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.227675 kubelet[2648]: W0212 19:44:26.227664 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.227749 kubelet[2648]: E0212 19:44:26.227741 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.227985 kubelet[2648]: E0212 19:44:26.227973 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.228090 kubelet[2648]: W0212 19:44:26.228079 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.228184 kubelet[2648]: E0212 19:44:26.228176 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.228471 kubelet[2648]: E0212 19:44:26.228459 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.228596 kubelet[2648]: W0212 19:44:26.228572 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.228699 kubelet[2648]: E0212 19:44:26.228689 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.228951 kubelet[2648]: E0212 19:44:26.228941 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.229037 kubelet[2648]: W0212 19:44:26.229026 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.229113 kubelet[2648]: E0212 19:44:26.229106 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.229547 kubelet[2648]: E0212 19:44:26.229535 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.229688 kubelet[2648]: W0212 19:44:26.229677 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.229764 kubelet[2648]: E0212 19:44:26.229756 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.229999 kubelet[2648]: E0212 19:44:26.229986 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.230097 kubelet[2648]: W0212 19:44:26.230084 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.230179 kubelet[2648]: E0212 19:44:26.230167 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.230448 kubelet[2648]: E0212 19:44:26.230434 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.230575 kubelet[2648]: W0212 19:44:26.230542 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.230575 kubelet[2648]: E0212 19:44:26.230572 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.230790 kubelet[2648]: E0212 19:44:26.230775 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.230790 kubelet[2648]: W0212 19:44:26.230787 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.230903 kubelet[2648]: E0212 19:44:26.230815 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.231302 kubelet[2648]: E0212 19:44:26.231286 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.231302 kubelet[2648]: W0212 19:44:26.231298 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.231473 kubelet[2648]: E0212 19:44:26.231318 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.231626 kubelet[2648]: E0212 19:44:26.231612 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.231683 kubelet[2648]: W0212 19:44:26.231624 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.231683 kubelet[2648]: E0212 19:44:26.231647 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.232025 kubelet[2648]: E0212 19:44:26.232010 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:26.232025 kubelet[2648]: W0212 19:44:26.232024 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:26.232125 kubelet[2648]: E0212 19:44:26.232048 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:26.399000 audit[3639]: NETFILTER_CFG table=filter:117 family=2 entries=13 op=nft_register_rule pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:44:26.399000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff537b36a0 a2=0 a3=7fff537b368c items=0 ppid=2844 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:26.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:44:26.400000 audit[3639]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:44:26.400000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fff537b36a0 a2=0 a3=7fff537b368c items=0 ppid=2844 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:26.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:44:27.177909 env[1420]: time="2024-02-12T19:44:27.177867689Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:27.183529 env[1420]: time="2024-02-12T19:44:27.183494233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:27.187585 env[1420]: time="2024-02-12T19:44:27.187553364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:27.191649 env[1420]: time="2024-02-12T19:44:27.191616596Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:27.192448 env[1420]: time="2024-02-12T19:44:27.192416902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\""
Feb 12 19:44:27.196072 env[1420]: time="2024-02-12T19:44:27.196035230Z" level=info msg="CreateContainer within sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 12 19:44:27.222145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151616408.mount: Deactivated successfully.
Feb 12 19:44:27.225818 kubelet[2648]: E0212 19:44:27.225544 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.225818 kubelet[2648]: W0212 19:44:27.225563 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.225818 kubelet[2648]: E0212 19:44:27.225585 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.226281 kubelet[2648]: E0212 19:44:27.225860 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.226281 kubelet[2648]: W0212 19:44:27.225881 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.226281 kubelet[2648]: E0212 19:44:27.225902 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.226281 kubelet[2648]: E0212 19:44:27.226104 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.226281 kubelet[2648]: W0212 19:44:27.226113 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.226281 kubelet[2648]: E0212 19:44:27.226129 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.226635 kubelet[2648]: E0212 19:44:27.226329 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.226635 kubelet[2648]: W0212 19:44:27.226354 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.226635 kubelet[2648]: E0212 19:44:27.226369 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.226635 kubelet[2648]: E0212 19:44:27.226542 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.226635 kubelet[2648]: W0212 19:44:27.226551 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.226635 kubelet[2648]: E0212 19:44:27.226565 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227068 kubelet[2648]: E0212 19:44:27.226730 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227068 kubelet[2648]: W0212 19:44:27.226739 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227068 kubelet[2648]: E0212 19:44:27.226760 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227068 kubelet[2648]: E0212 19:44:27.226991 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227068 kubelet[2648]: W0212 19:44:27.227003 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227068 kubelet[2648]: E0212 19:44:27.227019 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227377 kubelet[2648]: E0212 19:44:27.227179 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227377 kubelet[2648]: W0212 19:44:27.227188 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227377 kubelet[2648]: E0212 19:44:27.227203 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227545 kubelet[2648]: E0212 19:44:27.227379 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227545 kubelet[2648]: W0212 19:44:27.227389 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227545 kubelet[2648]: E0212 19:44:27.227404 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227670 kubelet[2648]: E0212 19:44:27.227586 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227670 kubelet[2648]: W0212 19:44:27.227595 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227670 kubelet[2648]: E0212 19:44:27.227610 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:44:27.227816 kubelet[2648]: E0212 19:44:27.227778 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:44:27.227816 kubelet[2648]: W0212 19:44:27.227787 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:44:27.227816 kubelet[2648]: E0212 19:44:27.227801 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 12 19:44:27.227968 kubelet[2648]: E0212 19:44:27.227950 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.227968 kubelet[2648]: W0212 19:44:27.227963 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.228064 kubelet[2648]: E0212 19:44:27.227976 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.231225 kubelet[2648]: E0212 19:44:27.231208 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.231225 kubelet[2648]: W0212 19:44:27.231221 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.231432 kubelet[2648]: E0212 19:44:27.231237 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.231522 kubelet[2648]: E0212 19:44:27.231507 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.231589 kubelet[2648]: W0212 19:44:27.231525 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.231589 kubelet[2648]: E0212 19:44:27.231546 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.231808 kubelet[2648]: E0212 19:44:27.231792 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.231808 kubelet[2648]: W0212 19:44:27.231805 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.231928 kubelet[2648]: E0212 19:44:27.231824 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.232066 kubelet[2648]: E0212 19:44:27.232051 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.232066 kubelet[2648]: W0212 19:44:27.232063 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.232200 kubelet[2648]: E0212 19:44:27.232083 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.232266 kubelet[2648]: E0212 19:44:27.232257 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.232313 kubelet[2648]: W0212 19:44:27.232271 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.232313 kubelet[2648]: E0212 19:44:27.232290 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.232477 kubelet[2648]: E0212 19:44:27.232460 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.232477 kubelet[2648]: W0212 19:44:27.232473 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.232588 kubelet[2648]: E0212 19:44:27.232489 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.232748 kubelet[2648]: E0212 19:44:27.232731 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.232748 kubelet[2648]: W0212 19:44:27.232744 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.233113 kubelet[2648]: E0212 19:44:27.232914 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.233113 kubelet[2648]: E0212 19:44:27.232917 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.233113 kubelet[2648]: W0212 19:44:27.233027 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.233113 kubelet[2648]: E0212 19:44:27.233079 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.233365 kubelet[2648]: E0212 19:44:27.233203 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.233365 kubelet[2648]: W0212 19:44:27.233213 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.233365 kubelet[2648]: E0212 19:44:27.233232 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.233503 kubelet[2648]: E0212 19:44:27.233411 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.233503 kubelet[2648]: W0212 19:44:27.233421 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.233503 kubelet[2648]: E0212 19:44:27.233434 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.233701 kubelet[2648]: E0212 19:44:27.233686 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.233772 kubelet[2648]: W0212 19:44:27.233702 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.233772 kubelet[2648]: E0212 19:44:27.233720 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.233973 kubelet[2648]: E0212 19:44:27.233943 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.233973 kubelet[2648]: W0212 19:44:27.233959 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.234082 kubelet[2648]: E0212 19:44:27.233978 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.234399 kubelet[2648]: E0212 19:44:27.234383 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.234399 kubelet[2648]: W0212 19:44:27.234395 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.234523 kubelet[2648]: E0212 19:44:27.234482 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.234655 kubelet[2648]: E0212 19:44:27.234640 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.234655 kubelet[2648]: W0212 19:44:27.234651 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.234824 kubelet[2648]: E0212 19:44:27.234787 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.234889 kubelet[2648]: E0212 19:44:27.234843 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.234889 kubelet[2648]: W0212 19:44:27.234851 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.234889 kubelet[2648]: E0212 19:44:27.234864 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.235054 kubelet[2648]: E0212 19:44:27.235036 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.235054 kubelet[2648]: W0212 19:44:27.235049 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.235291 kubelet[2648]: E0212 19:44:27.235063 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.235489 kubelet[2648]: E0212 19:44:27.235474 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.235489 kubelet[2648]: W0212 19:44:27.235486 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.237187 kubelet[2648]: E0212 19:44:27.235501 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:44:27.237187 kubelet[2648]: E0212 19:44:27.235926 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:44:27.237187 kubelet[2648]: W0212 19:44:27.235935 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:44:27.237187 kubelet[2648]: E0212 19:44:27.235945 2648 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:44:27.238232 env[1420]: time="2024-02-12T19:44:27.238197659Z" level=info msg="CreateContainer within sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\"" Feb 12 19:44:27.240137 env[1420]: time="2024-02-12T19:44:27.238819264Z" level=info msg="StartContainer for \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\"" Feb 12 19:44:27.306507 env[1420]: time="2024-02-12T19:44:27.306456792Z" level=info msg="StartContainer for \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\" returns successfully" Feb 12 19:44:27.415354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078-rootfs.mount: Deactivated successfully. 
Feb 12 19:44:27.649057 env[1420]: time="2024-02-12T19:44:27.648918465Z" level=info msg="shim disconnected" id=3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078
Feb 12 19:44:27.649057 env[1420]: time="2024-02-12T19:44:27.648977765Z" level=warning msg="cleaning up after shim disconnected" id=3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078 namespace=k8s.io
Feb 12 19:44:27.649057 env[1420]: time="2024-02-12T19:44:27.648989665Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:27.658410 env[1420]: time="2024-02-12T19:44:27.658372138Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3714 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:28.062664 kubelet[2648]: E0212 19:44:28.062621 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5
Feb 12 19:44:28.170951 env[1420]: time="2024-02-12T19:44:28.170897115Z" level=info msg="StopPodSandbox for \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\""
Feb 12 19:44:28.171228 env[1420]: time="2024-02-12T19:44:28.170963715Z" level=info msg="Container to stop \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:44:28.177470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f-shm.mount: Deactivated successfully.
Feb 12 19:44:28.213546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f-rootfs.mount: Deactivated successfully.
Feb 12 19:44:28.227841 env[1420]: time="2024-02-12T19:44:28.227791851Z" level=info msg="shim disconnected" id=d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f
Feb 12 19:44:28.228277 env[1420]: time="2024-02-12T19:44:28.227845151Z" level=warning msg="cleaning up after shim disconnected" id=d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f namespace=k8s.io
Feb 12 19:44:28.228277 env[1420]: time="2024-02-12T19:44:28.227857251Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:28.235711 env[1420]: time="2024-02-12T19:44:28.235658711Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:28.236217 env[1420]: time="2024-02-12T19:44:28.236186315Z" level=info msg="TearDown network for sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" successfully"
Feb 12 19:44:28.236365 env[1420]: time="2024-02-12T19:44:28.236326416Z" level=info msg="StopPodSandbox for \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" returns successfully"
Feb 12 19:44:28.439820 kubelet[2648]: I0212 19:44:28.439758 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fe74fe8b-83c6-4009-a71c-e6778b511f42-node-certs\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.439898 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-xtables-lock\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.439986 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-flexvol-driver-host\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.440073 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-log-dir\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.440137 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-policysync\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.440168 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-lib-modules\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440589 kubelet[2648]: I0212 19:44:28.440198 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-bin-dir\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440237 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-net-dir\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440277 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-run-calico\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440315 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n94mz\" (UniqueName: \"kubernetes.io/projected/fe74fe8b-83c6-4009-a71c-e6778b511f42-kube-api-access-n94mz\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440371 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe74fe8b-83c6-4009-a71c-e6778b511f42-tigera-ca-bundle\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440407 2648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-lib-calico\") pod \"fe74fe8b-83c6-4009-a71c-e6778b511f42\" (UID: \"fe74fe8b-83c6-4009-a71c-e6778b511f42\") "
Feb 12 19:44:28.440941 kubelet[2648]: I0212 19:44:28.440471 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441278 kubelet[2648]: I0212 19:44:28.440516 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441278 kubelet[2648]: I0212 19:44:28.440541 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441278 kubelet[2648]: I0212 19:44:28.440568 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441278 kubelet[2648]: I0212 19:44:28.440592 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-policysync" (OuterVolumeSpecName: "policysync") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441278 kubelet[2648]: I0212 19:44:28.440615 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441585 kubelet[2648]: I0212 19:44:28.440638 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441585 kubelet[2648]: I0212 19:44:28.440663 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.441585 kubelet[2648]: I0212 19:44:28.440688 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:44:28.442252 kubelet[2648]: W0212 19:44:28.442209 2648 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fe74fe8b-83c6-4009-a71c-e6778b511f42/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled
Feb 12 19:44:28.442706 kubelet[2648]: I0212 19:44:28.442674 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74fe8b-83c6-4009-a71c-e6778b511f42-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:44:28.448753 systemd[1]: var-lib-kubelet-pods-fe74fe8b\x2d83c6\x2d4009\x2da71c\x2de6778b511f42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn94mz.mount: Deactivated successfully.
Feb 12 19:44:28.449897 kubelet[2648]: I0212 19:44:28.449544 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe74fe8b-83c6-4009-a71c-e6778b511f42-kube-api-access-n94mz" (OuterVolumeSpecName: "kube-api-access-n94mz") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "kube-api-access-n94mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:44:28.453162 systemd[1]: var-lib-kubelet-pods-fe74fe8b\x2d83c6\x2d4009\x2da71c\x2de6778b511f42-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Feb 12 19:44:28.453919 kubelet[2648]: I0212 19:44:28.453874 2648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe74fe8b-83c6-4009-a71c-e6778b511f42-node-certs" (OuterVolumeSpecName: "node-certs") pod "fe74fe8b-83c6-4009-a71c-e6778b511f42" (UID: "fe74fe8b-83c6-4009-a71c-e6778b511f42"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:44:28.541324 kubelet[2648]: I0212 19:44:28.541287 2648 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-run-calico\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541324 kubelet[2648]: I0212 19:44:28.541326 2648 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-n94mz\" (UniqueName: \"kubernetes.io/projected/fe74fe8b-83c6-4009-a71c-e6778b511f42-kube-api-access-n94mz\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541364 2648 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe74fe8b-83c6-4009-a71c-e6778b511f42-tigera-ca-bundle\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541382 2648 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-var-lib-calico\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541397 2648 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-xtables-lock\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541413 2648 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-flexvol-driver-host\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541429 2648 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fe74fe8b-83c6-4009-a71c-e6778b511f42-node-certs\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541447 2648 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-policysync\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541464 2648 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-log-dir\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541557 kubelet[2648]: I0212 19:44:28.541480 2648 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-lib-modules\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541824 kubelet[2648]: I0212 19:44:28.541495 2648 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-net-dir\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:28.541824 kubelet[2648]: I0212 19:44:28.541516 2648 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fe74fe8b-83c6-4009-a71c-e6778b511f42-cni-bin-dir\") on node \"ci-3510.3.2-a-c8dbf10a06\" DevicePath \"\""
Feb 12 19:44:29.172684 kubelet[2648]: I0212 19:44:29.172657 2648 scope.go:115] "RemoveContainer" containerID="3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078"
Feb 12 19:44:29.176151 env[1420]: time="2024-02-12T19:44:29.176104597Z" level=info msg="RemoveContainer for \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\""
Feb 12 19:44:29.191592 env[1420]: time="2024-02-12T19:44:29.191547713Z" level=info msg="RemoveContainer for \"3e4b2995d3bb8e21454a444a0567dcddb98e1b9c38742c0a0256d9c0b0d7d078\" returns successfully"
Feb 12 19:44:29.204684 kubelet[2648]: I0212 19:44:29.204650 2648 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:29.204818 kubelet[2648]: E0212 19:44:29.204718 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe74fe8b-83c6-4009-a71c-e6778b511f42" containerName="flexvol-driver"
Feb 12 19:44:29.204818 kubelet[2648]: I0212 19:44:29.204751 2648 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe74fe8b-83c6-4009-a71c-e6778b511f42" containerName="flexvol-driver"
Feb 12 19:44:29.245663 kubelet[2648]: I0212 19:44:29.245640 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-tigera-ca-bundle\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.245854 kubelet[2648]: I0212 19:44:29.245837 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-flexvol-driver-host\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.245940 kubelet[2648]: I0212 19:44:29.245877 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-xtables-lock\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.245940 kubelet[2648]: I0212 19:44:29.245909 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-policysync\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.245940 kubelet[2648]: I0212 19:44:29.245939 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-var-run-calico\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.246081 kubelet[2648]: I0212 19:44:29.245967 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-cni-bin-dir\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.246081 kubelet[2648]: I0212 19:44:29.245996 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-cni-net-dir\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.246081 kubelet[2648]: I0212 19:44:29.246027 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-lib-modules\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8"
Feb 12 19:44:29.246081 kubelet[2648]: I0212 19:44:29.246056 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-cni-log-dir\") pod 
\"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8" Feb 12 19:44:29.246244 kubelet[2648]: I0212 19:44:29.246089 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-node-certs\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8" Feb 12 19:44:29.246244 kubelet[2648]: I0212 19:44:29.246121 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-var-lib-calico\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8" Feb 12 19:44:29.246244 kubelet[2648]: I0212 19:44:29.246151 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlrb\" (UniqueName: \"kubernetes.io/projected/e9ff07f7-8b86-422f-b5cc-eb5810d33d0f-kube-api-access-zqlrb\") pod \"calico-node-pq9x8\" (UID: \"e9ff07f7-8b86-422f-b5cc-eb5810d33d0f\") " pod="calico-system/calico-node-pq9x8" Feb 12 19:44:29.510039 env[1420]: time="2024-02-12T19:44:29.509288506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pq9x8,Uid:e9ff07f7-8b86-422f-b5cc-eb5810d33d0f,Namespace:calico-system,Attempt:0,}" Feb 12 19:44:29.734880 env[1420]: time="2024-02-12T19:44:29.734658704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:29.734880 env[1420]: time="2024-02-12T19:44:29.734704404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:29.734880 env[1420]: time="2024-02-12T19:44:29.734720804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:29.735583 env[1420]: time="2024-02-12T19:44:29.735502010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160 pid=3779 runtime=io.containerd.runc.v2 Feb 12 19:44:29.782628 env[1420]: time="2024-02-12T19:44:29.782522064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pq9x8,Uid:e9ff07f7-8b86-422f-b5cc-eb5810d33d0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\"" Feb 12 19:44:29.786037 env[1420]: time="2024-02-12T19:44:29.786003591Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 19:44:30.062510 kubelet[2648]: E0212 19:44:30.062384 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:30.068071 kubelet[2648]: I0212 19:44:30.068027 2648 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fe74fe8b-83c6-4009-a71c-e6778b511f42 path="/var/lib/kubelet/pods/fe74fe8b-83c6-4009-a71c-e6778b511f42/volumes" Feb 12 19:44:30.069185 env[1420]: time="2024-02-12T19:44:30.069141514Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns 
container id \"6e4ca6d684438c3c5e3029da07606f3cde2e7e32183b5e025c12bb1a3b756cd3\"" Feb 12 19:44:30.070569 env[1420]: time="2024-02-12T19:44:30.069537817Z" level=info msg="StartContainer for \"6e4ca6d684438c3c5e3029da07606f3cde2e7e32183b5e025c12bb1a3b756cd3\"" Feb 12 19:44:30.129686 env[1420]: time="2024-02-12T19:44:30.129561961Z" level=info msg="StartContainer for \"6e4ca6d684438c3c5e3029da07606f3cde2e7e32183b5e025c12bb1a3b756cd3\" returns successfully" Feb 12 19:44:31.725473 env[1420]: time="2024-02-12T19:44:31.725321282Z" level=info msg="shim disconnected" id=6e4ca6d684438c3c5e3029da07606f3cde2e7e32183b5e025c12bb1a3b756cd3 Feb 12 19:44:31.725473 env[1420]: time="2024-02-12T19:44:31.725404182Z" level=warning msg="cleaning up after shim disconnected" id=6e4ca6d684438c3c5e3029da07606f3cde2e7e32183b5e025c12bb1a3b756cd3 namespace=k8s.io Feb 12 19:44:31.725473 env[1420]: time="2024-02-12T19:44:31.725415983Z" level=info msg="cleaning up dead shim" Feb 12 19:44:31.733487 env[1420]: time="2024-02-12T19:44:31.733450041Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3871 runtime=io.containerd.runc.v2\n" Feb 12 19:44:32.062442 kubelet[2648]: E0212 19:44:32.062291 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:32.182763 env[1420]: time="2024-02-12T19:44:32.182295785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 19:44:34.062288 kubelet[2648]: E0212 19:44:34.062250 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:34.784262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330578710.mount: Deactivated successfully. Feb 12 19:44:36.062301 kubelet[2648]: E0212 19:44:36.062265 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:38.063092 kubelet[2648]: E0212 19:44:38.063045 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:40.062111 kubelet[2648]: E0212 19:44:40.062081 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:42.062462 kubelet[2648]: E0212 19:44:42.062412 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:44.063677 kubelet[2648]: E0212 19:44:44.063640 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:45.804578 env[1420]: time="2024-02-12T19:44:45.804527892Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.811587 env[1420]: time="2024-02-12T19:44:45.811553333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.814406 env[1420]: time="2024-02-12T19:44:45.814378550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.819518 env[1420]: time="2024-02-12T19:44:45.818845876Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.819518 env[1420]: time="2024-02-12T19:44:45.819228278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 12 19:44:45.822860 env[1420]: time="2024-02-12T19:44:45.822073995Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:44:45.859859 env[1420]: time="2024-02-12T19:44:45.859818217Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4\"" Feb 12 19:44:45.861501 env[1420]: time="2024-02-12T19:44:45.861471226Z" level=info msg="StartContainer for \"d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4\"" Feb 12 19:44:45.900299 systemd[1]: run-containerd-runc-k8s.io-d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4-runc.lbVBQq.mount: Deactivated successfully. Feb 12 19:44:45.937724 env[1420]: time="2024-02-12T19:44:45.937669874Z" level=info msg="StartContainer for \"d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4\" returns successfully" Feb 12 19:44:46.062733 kubelet[2648]: E0212 19:44:46.061800 2648 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:47.668107 env[1420]: time="2024-02-12T19:44:47.668040264Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:44:47.688199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4-rootfs.mount: Deactivated successfully. 
Feb 12 19:44:47.752481 kubelet[2648]: I0212 19:44:47.752451 2648 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:44:47.778097 kubelet[2648]: I0212 19:44:47.778047 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:47.786089 kubelet[2648]: I0212 19:44:47.786058 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f0b1352-378c-4dae-bd2a-c8486e2500ed-config-volume\") pod \"coredns-787d4945fb-ws7rt\" (UID: \"4f0b1352-378c-4dae-bd2a-c8486e2500ed\") " pod="kube-system/coredns-787d4945fb-ws7rt" Feb 12 19:44:47.786262 kubelet[2648]: I0212 19:44:47.786244 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2cn\" (UniqueName: \"kubernetes.io/projected/4f0b1352-378c-4dae-bd2a-c8486e2500ed-kube-api-access-2p2cn\") pod \"coredns-787d4945fb-ws7rt\" (UID: \"4f0b1352-378c-4dae-bd2a-c8486e2500ed\") " pod="kube-system/coredns-787d4945fb-ws7rt" Feb 12 19:44:47.800006 kubelet[2648]: I0212 19:44:47.799983 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:47.802028 kubelet[2648]: I0212 19:44:47.801847 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:47.887186 kubelet[2648]: I0212 19:44:47.887149 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknzr\" (UniqueName: \"kubernetes.io/projected/57d1cdf4-1214-4bf4-ab2d-b4fcd203788b-kube-api-access-rknzr\") pod \"coredns-787d4945fb-j8xlt\" (UID: \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\") " pod="kube-system/coredns-787d4945fb-j8xlt" Feb 12 19:44:47.887405 kubelet[2648]: I0212 19:44:47.887212 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fcdb66b-2bf1-45a1-90f0-a496bd670686-tigera-ca-bundle\") pod 
\"calico-kube-controllers-6c6846549b-nwjfr\" (UID: \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\") " pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" Feb 12 19:44:47.887405 kubelet[2648]: I0212 19:44:47.887247 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvqj\" (UniqueName: \"kubernetes.io/projected/7fcdb66b-2bf1-45a1-90f0-a496bd670686-kube-api-access-trvqj\") pod \"calico-kube-controllers-6c6846549b-nwjfr\" (UID: \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\") " pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" Feb 12 19:44:47.887405 kubelet[2648]: I0212 19:44:47.887345 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57d1cdf4-1214-4bf4-ab2d-b4fcd203788b-config-volume\") pod \"coredns-787d4945fb-j8xlt\" (UID: \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\") " pod="kube-system/coredns-787d4945fb-j8xlt" Feb 12 19:44:48.066630 env[1420]: time="2024-02-12T19:44:48.066162038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z7w9,Uid:7deb942c-192b-42b5-8511-5e8ab4d0a3b5,Namespace:calico-system,Attempt:0,}" Feb 12 19:44:48.082404 env[1420]: time="2024-02-12T19:44:48.082166229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ws7rt,Uid:4f0b1352-378c-4dae-bd2a-c8486e2500ed,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:48.105281 env[1420]: time="2024-02-12T19:44:48.105007358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j8xlt,Uid:57d1cdf4-1214-4bf4-ab2d-b4fcd203788b,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:48.107219 env[1420]: time="2024-02-12T19:44:48.107180970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6846549b-nwjfr,Uid:7fcdb66b-2bf1-45a1-90f0-a496bd670686,Namespace:calico-system,Attempt:0,}" Feb 12 19:44:52.677824 env[1420]: 
time="2024-02-12T19:44:52.677756496Z" level=info msg="shim disconnected" id=d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4 Feb 12 19:44:52.677824 env[1420]: time="2024-02-12T19:44:52.677824696Z" level=warning msg="cleaning up after shim disconnected" id=d0cb1eb0c3dee597d70b014dc68edb0fb935cbc89d1881a6e6629421019b56b4 namespace=k8s.io Feb 12 19:44:52.677824 env[1420]: time="2024-02-12T19:44:52.677838896Z" level=info msg="cleaning up dead shim" Feb 12 19:44:52.686280 env[1420]: time="2024-02-12T19:44:52.686241442Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3941 runtime=io.containerd.runc.v2\n" Feb 12 19:44:53.222108 env[1420]: time="2024-02-12T19:44:53.222062910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 19:44:53.678825 env[1420]: time="2024-02-12T19:44:53.678771939Z" level=error msg="Failed to destroy network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.681828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217-shm.mount: Deactivated successfully. 
Feb 12 19:44:53.683052 env[1420]: time="2024-02-12T19:44:53.683006061Z" level=error msg="encountered an error cleaning up failed sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.683142 env[1420]: time="2024-02-12T19:44:53.683070661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z7w9,Uid:7deb942c-192b-42b5-8511-5e8ab4d0a3b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.683372 kubelet[2648]: E0212 19:44:53.683328 2648 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.683729 kubelet[2648]: E0212 19:44:53.683419 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:53.683729 kubelet[2648]: E0212 19:44:53.683452 2648 
kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z7w9" Feb 12 19:44:53.683729 kubelet[2648]: E0212 19:44:53.683541 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8z7w9_calico-system(7deb942c-192b-42b5-8511-5e8ab4d0a3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8z7w9_calico-system(7deb942c-192b-42b5-8511-5e8ab4d0a3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:53.865536 env[1420]: time="2024-02-12T19:44:53.865478931Z" level=error msg="Failed to destroy network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.865874 env[1420]: time="2024-02-12T19:44:53.865833733Z" level=error msg="encountered an error cleaning up failed sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 12 19:44:53.865964 env[1420]: time="2024-02-12T19:44:53.865908234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ws7rt,Uid:4f0b1352-378c-4dae-bd2a-c8486e2500ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.866184 kubelet[2648]: E0212 19:44:53.866161 2648 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.866273 kubelet[2648]: E0212 19:44:53.866242 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-ws7rt" Feb 12 19:44:53.866329 kubelet[2648]: E0212 19:44:53.866274 2648 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-787d4945fb-ws7rt" Feb 12 19:44:53.867413 kubelet[2648]: E0212 19:44:53.866412 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-ws7rt_kube-system(4f0b1352-378c-4dae-bd2a-c8486e2500ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-ws7rt_kube-system(4f0b1352-378c-4dae-bd2a-c8486e2500ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ws7rt" podUID=4f0b1352-378c-4dae-bd2a-c8486e2500ed Feb 12 19:44:53.913874 env[1420]: time="2024-02-12T19:44:53.913812988Z" level=error msg="Failed to destroy network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.914186 env[1420]: time="2024-02-12T19:44:53.914149990Z" level=error msg="encountered an error cleaning up failed sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.914291 env[1420]: time="2024-02-12T19:44:53.914204790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j8xlt,Uid:57d1cdf4-1214-4bf4-ab2d-b4fcd203788b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.914506 kubelet[2648]: E0212 19:44:53.914466 2648 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.914614 kubelet[2648]: E0212 19:44:53.914541 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-j8xlt" Feb 12 19:44:53.914614 kubelet[2648]: E0212 19:44:53.914594 2648 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-j8xlt" Feb 12 19:44:53.914710 kubelet[2648]: E0212 19:44:53.914676 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-j8xlt_kube-system(57d1cdf4-1214-4bf4-ab2d-b4fcd203788b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-787d4945fb-j8xlt_kube-system(57d1cdf4-1214-4bf4-ab2d-b4fcd203788b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-j8xlt" podUID=57d1cdf4-1214-4bf4-ab2d-b4fcd203788b Feb 12 19:44:53.962556 env[1420]: time="2024-02-12T19:44:53.962448747Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\"" Feb 12 19:44:53.963801 env[1420]: time="2024-02-12T19:44:53.963734054Z" level=info msg="TearDown network for sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" successfully" Feb 12 19:44:53.963801 env[1420]: time="2024-02-12T19:44:53.963794054Z" level=info msg="StopPodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" returns successfully" Feb 12 19:44:53.964162 env[1420]: time="2024-02-12T19:44:53.964135056Z" level=info msg="RemovePodSandbox for \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\"" Feb 12 19:44:53.964247 env[1420]: time="2024-02-12T19:44:53.964168156Z" level=info msg="Forcibly stopping sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\"" Feb 12 19:44:53.964292 env[1420]: time="2024-02-12T19:44:53.964251557Z" level=info msg="TearDown network for sandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" successfully" Feb 12 19:44:53.973745 env[1420]: time="2024-02-12T19:44:53.973701707Z" level=error msg="Failed to destroy network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 12 19:44:53.974000 env[1420]: time="2024-02-12T19:44:53.973966508Z" level=error msg="encountered an error cleaning up failed sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.974073 env[1420]: time="2024-02-12T19:44:53.974016608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6846549b-nwjfr,Uid:7fcdb66b-2bf1-45a1-90f0-a496bd670686,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.974264 kubelet[2648]: E0212 19:44:53.974239 2648 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:53.974377 kubelet[2648]: E0212 19:44:53.974297 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" Feb 12 
19:44:53.974377 kubelet[2648]: E0212 19:44:53.974327 2648 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" Feb 12 19:44:53.974496 kubelet[2648]: E0212 19:44:53.974406 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c6846549b-nwjfr_calico-system(7fcdb66b-2bf1-45a1-90f0-a496bd670686)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c6846549b-nwjfr_calico-system(7fcdb66b-2bf1-45a1-90f0-a496bd670686)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" podUID=7fcdb66b-2bf1-45a1-90f0-a496bd670686 Feb 12 19:44:54.028395 env[1420]: time="2024-02-12T19:44:54.028275095Z" level=info msg="RemovePodSandbox \"1dbd7e96b14bcf898055379852a3ba088d625970c20bfc6ef4125e5729fef789\" returns successfully" Feb 12 19:44:54.029189 env[1420]: time="2024-02-12T19:44:54.029152700Z" level=info msg="StopPodSandbox for \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\"" Feb 12 19:44:54.029310 env[1420]: time="2024-02-12T19:44:54.029256800Z" level=info msg="TearDown network for sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" successfully" Feb 12 19:44:54.029415 env[1420]: time="2024-02-12T19:44:54.029308701Z" 
level=info msg="StopPodSandbox for \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" returns successfully" Feb 12 19:44:54.029708 env[1420]: time="2024-02-12T19:44:54.029678603Z" level=info msg="RemovePodSandbox for \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\"" Feb 12 19:44:54.029800 env[1420]: time="2024-02-12T19:44:54.029709903Z" level=info msg="Forcibly stopping sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\"" Feb 12 19:44:54.029859 env[1420]: time="2024-02-12T19:44:54.029808903Z" level=info msg="TearDown network for sandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" successfully" Feb 12 19:44:54.224294 env[1420]: time="2024-02-12T19:44:54.223461421Z" level=info msg="RemovePodSandbox \"d42f87ad52f0d2a6dc8f1aab45d6f54d3d0e3137a9401e0f7b6729417cd1fd1f\" returns successfully" Feb 12 19:44:54.224621 kubelet[2648]: I0212 19:44:54.224595 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:44:54.225558 env[1420]: time="2024-02-12T19:44:54.225521832Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:44:54.226296 kubelet[2648]: I0212 19:44:54.226261 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:44:54.227011 env[1420]: time="2024-02-12T19:44:54.226972840Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:44:54.229700 kubelet[2648]: I0212 19:44:54.228863 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:44:54.229793 env[1420]: time="2024-02-12T19:44:54.229383252Z" level=info msg="StopPodSandbox for 
\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:44:54.230814 kubelet[2648]: I0212 19:44:54.230598 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:44:54.232704 env[1420]: time="2024-02-12T19:44:54.231501564Z" level=info msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" Feb 12 19:44:54.303549 env[1420]: time="2024-02-12T19:44:54.303489442Z" level=error msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" failed" error="failed to destroy network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:54.304015 kubelet[2648]: E0212 19:44:54.303989 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:44:54.304144 kubelet[2648]: E0212 19:44:54.304044 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217} Feb 12 19:44:54.304144 kubelet[2648]: E0212 19:44:54.304095 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:44:54.304144 kubelet[2648]: E0212 19:44:54.304131 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:44:54.305999 env[1420]: time="2024-02-12T19:44:54.305948355Z" level=error msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" failed" error="failed to destroy network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:54.306318 kubelet[2648]: E0212 19:44:54.306299 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:44:54.306437 kubelet[2648]: E0212 19:44:54.306355 2648 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68} Feb 12 19:44:54.306437 kubelet[2648]: E0212 19:44:54.306418 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:44:54.306567 kubelet[2648]: E0212 19:44:54.306454 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" podUID=7fcdb66b-2bf1-45a1-90f0-a496bd670686 Feb 12 19:44:54.317646 env[1420]: time="2024-02-12T19:44:54.317601316Z" level=error msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" failed" error="failed to destroy network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:54.317987 kubelet[2648]: E0212 19:44:54.317813 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:44:54.317987 kubelet[2648]: E0212 19:44:54.317874 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5} Feb 12 19:44:54.317987 kubelet[2648]: E0212 19:44:54.317926 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:44:54.317987 kubelet[2648]: E0212 19:44:54.317965 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-j8xlt" podUID=57d1cdf4-1214-4bf4-ab2d-b4fcd203788b Feb 12 19:44:54.318319 env[1420]: time="2024-02-12T19:44:54.318276820Z" level=error msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" 
failed" error="failed to destroy network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:44:54.318688 kubelet[2648]: E0212 19:44:54.318546 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:44:54.318688 kubelet[2648]: E0212 19:44:54.318578 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5} Feb 12 19:44:54.318688 kubelet[2648]: E0212 19:44:54.318619 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f0b1352-378c-4dae-bd2a-c8486e2500ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:44:54.318688 kubelet[2648]: E0212 19:44:54.318651 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f0b1352-378c-4dae-bd2a-c8486e2500ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ws7rt" podUID=4f0b1352-378c-4dae-bd2a-c8486e2500ed Feb 12 19:44:54.436713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5-shm.mount: Deactivated successfully. Feb 12 19:45:06.065965 env[1420]: time="2024-02-12T19:45:06.065920637Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:45:06.115501 env[1420]: time="2024-02-12T19:45:06.115446168Z" level=error msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" failed" error="failed to destroy network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:45:06.115907 kubelet[2648]: E0212 19:45:06.115753 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:06.115907 kubelet[2648]: E0212 19:45:06.115792 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5} Feb 12 19:45:06.115907 kubelet[2648]: E0212 19:45:06.115837 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:45:06.115907 kubelet[2648]: E0212 19:45:06.115874 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-j8xlt" podUID=57d1cdf4-1214-4bf4-ab2d-b4fcd203788b Feb 12 19:45:06.587676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971624818.mount: Deactivated successfully. 
Feb 12 19:45:06.724552 env[1420]: time="2024-02-12T19:45:06.724503108Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:06.737387 env[1420]: time="2024-02-12T19:45:06.737344867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:06.741723 env[1420]: time="2024-02-12T19:45:06.741686788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:06.747988 env[1420]: time="2024-02-12T19:45:06.747956017Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:06.748376 env[1420]: time="2024-02-12T19:45:06.748325019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 12 19:45:06.765506 env[1420]: time="2024-02-12T19:45:06.757878463Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 19:45:06.850477 env[1420]: time="2024-02-12T19:45:06.850371694Z" level=info msg="CreateContainer within sandbox \"a2d1f3644c704ebe8860a8e2f1be025fa48e29f5c5f93b8471ab7e3254ad7160\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0\"" Feb 12 19:45:06.851241 env[1420]: time="2024-02-12T19:45:06.851202198Z" level=info msg="StartContainer for 
\"87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0\"" Feb 12 19:45:06.912320 env[1420]: time="2024-02-12T19:45:06.912248183Z" level=info msg="StartContainer for \"87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0\" returns successfully" Feb 12 19:45:07.064667 env[1420]: time="2024-02-12T19:45:07.064596591Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:45:07.065314 env[1420]: time="2024-02-12T19:45:07.064693191Z" level=info msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:45:07.117303 env[1420]: time="2024-02-12T19:45:07.117181134Z" level=error msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" failed" error="failed to destroy network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:45:07.118115 kubelet[2648]: E0212 19:45:07.118091 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:07.120182 kubelet[2648]: E0212 19:45:07.120156 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68} Feb 12 19:45:07.120381 kubelet[2648]: E0212 19:45:07.120213 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:45:07.120381 kubelet[2648]: E0212 19:45:07.120250 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fcdb66b-2bf1-45a1-90f0-a496bd670686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" podUID=7fcdb66b-2bf1-45a1-90f0-a496bd670686 Feb 12 19:45:07.124118 env[1420]: time="2024-02-12T19:45:07.124073165Z" level=error msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" failed" error="failed to destroy network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:45:07.125356 kubelet[2648]: E0212 19:45:07.124394 2648 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:07.125356 kubelet[2648]: E0212 19:45:07.124430 2648 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217} Feb 12 19:45:07.125356 kubelet[2648]: E0212 19:45:07.124458 2648 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:45:07.125356 kubelet[2648]: E0212 19:45:07.124481 2648 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7deb942c-192b-42b5-8511-5e8ab4d0a3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z7w9" podUID=7deb942c-192b-42b5-8511-5e8ab4d0a3b5 Feb 12 19:45:07.199251 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 19:45:07.199423 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 12 19:45:08.063636 env[1420]: time="2024-02-12T19:45:08.063585505Z" level=info msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" Feb 12 19:45:08.102628 kubelet[2648]: I0212 19:45:08.102087 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pq9x8" podStartSLOduration=-9.223371997752733e+09 pod.CreationTimestamp="2024-02-12 19:44:29 +0000 UTC" firstStartedPulling="2024-02-12 19:44:32.181935783 +0000 UTC m=+38.352763719" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:07.29157524 +0000 UTC m=+73.462403176" watchObservedRunningTime="2024-02-12 19:45:08.102042381 +0000 UTC m=+74.272870217" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.103 [INFO][4330] k8s.go 578: Cleaning up netns ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.103 [INFO][4330] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" iface="eth0" netns="/var/run/netns/cni-e69f94b4-5b83-ff1b-44f7-4d943aaa5943" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.104 [INFO][4330] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" iface="eth0" netns="/var/run/netns/cni-e69f94b4-5b83-ff1b-44f7-4d943aaa5943" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.105 [INFO][4330] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" iface="eth0" netns="/var/run/netns/cni-e69f94b4-5b83-ff1b-44f7-4d943aaa5943" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.105 [INFO][4330] k8s.go 585: Releasing IP address(es) ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.105 [INFO][4330] utils.go 188: Calico CNI releasing IP address ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.129 [INFO][4337] ipam_plugin.go 415: Releasing address using handleID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.129 [INFO][4337] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.129 [INFO][4337] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.134 [WARNING][4337] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.134 [INFO][4337] ipam_plugin.go 443: Releasing address using workloadID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.136 [INFO][4337] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:45:08.137849 env[1420]: 2024-02-12 19:45:08.136 [INFO][4330] k8s.go 591: Teardown processing complete. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5"
Feb 12 19:45:08.143489 env[1420]: time="2024-02-12T19:45:08.139221752Z" level=info msg="TearDown network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" successfully"
Feb 12 19:45:08.143489 env[1420]: time="2024-02-12T19:45:08.139268552Z" level=info msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" returns successfully"
Feb 12 19:45:08.143489 env[1420]: time="2024-02-12T19:45:08.139906155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ws7rt,Uid:4f0b1352-378c-4dae-bd2a-c8486e2500ed,Namespace:kube-system,Attempt:1,}"
Feb 12 19:45:08.142651 systemd[1]: run-netns-cni\x2de69f94b4\x2d5b83\x2dff1b\x2d44f7\x2d4d943aaa5943.mount: Deactivated successfully.
Feb 12 19:45:08.349375 systemd-networkd[1592]: cali51a79a28a68: Link UP
Feb 12 19:45:08.362911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:45:08.363006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali51a79a28a68: link becomes ready
Feb 12 19:45:08.365671 systemd-networkd[1592]: cali51a79a28a68: Gained carrier
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.230 [INFO][4344] utils.go 100: File /var/lib/calico/mtu does not exist
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.240 [INFO][4344] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0 coredns-787d4945fb- kube-system 4f0b1352-378c-4dae-bd2a-c8486e2500ed 820 0 2024-02-12 19:44:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 coredns-787d4945fb-ws7rt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51a79a28a68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.240 [INFO][4344] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.265 [INFO][4355] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" HandleID="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.278 [INFO][4355] ipam_plugin.go 268: Auto assigning IP ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" HandleID="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-c8dbf10a06", "pod":"coredns-787d4945fb-ws7rt", "timestamp":"2024-02-12 19:45:08.265899432 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.278 [INFO][4355] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.278 [INFO][4355] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.278 [INFO][4355] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06'
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.279 [INFO][4355] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.283 [INFO][4355] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.286 [INFO][4355] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.293 [INFO][4355] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.295 [INFO][4355] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.295 [INFO][4355] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.296 [INFO][4355] ipam.go 1682: Creating new handle: k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.299 [INFO][4355] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.303 [INFO][4355] ipam.go 1216: Successfully claimed IPs: [192.168.124.1/26] block=192.168.124.0/26 handle="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.303 [INFO][4355] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.1/26] handle="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" host="ci-3510.3.2-a-c8dbf10a06"
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.304 [INFO][4355] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:45:08.375527 env[1420]: 2024-02-12 19:45:08.304 [INFO][4355] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.1/26] IPv6=[] ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" HandleID="k8s-pod-network.5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.307 [INFO][4344] k8s.go 385: Populated endpoint ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"4f0b1352-378c-4dae-bd2a-c8486e2500ed", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"coredns-787d4945fb-ws7rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51a79a28a68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.308 [INFO][4344] k8s.go 386: Calico CNI using IPs: [192.168.124.1/32] ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.308 [INFO][4344] dataplane_linux.go 68: Setting the host side veth name to cali51a79a28a68 ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.366 [INFO][4344] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.366 [INFO][4344] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"4f0b1352-378c-4dae-bd2a-c8486e2500ed", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14", Pod:"coredns-787d4945fb-ws7rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51a79a28a68", MAC:"0a:5b:e3:a9:45:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:45:08.376573 env[1420]: 2024-02-12 19:45:08.373 [INFO][4344] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14" Namespace="kube-system" Pod="coredns-787d4945fb-ws7rt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0"
Feb 12 19:45:08.400058 env[1420]: time="2024-02-12T19:45:08.399983647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:08.400058 env[1420]: time="2024-02-12T19:45:08.400019147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:08.400279 env[1420]: time="2024-02-12T19:45:08.400243748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:08.400505 env[1420]: time="2024-02-12T19:45:08.400461149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14 pid=4403 runtime=io.containerd.runc.v2
Feb 12 19:45:08.451045 env[1420]: time="2024-02-12T19:45:08.451002481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ws7rt,Uid:4f0b1352-378c-4dae-bd2a-c8486e2500ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14\""
Feb 12 19:45:08.454457 env[1420]: time="2024-02-12T19:45:08.454110095Z" level=info msg="CreateContainer within sandbox \"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:45:08.501114 env[1420]: time="2024-02-12T19:45:08.501065510Z" level=info msg="CreateContainer within sandbox \"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c07c74c36723088a38e8ada25ad4cf2783adbb15619884cbdf84a5b8fd6405e\""
Feb 12 19:45:08.503706 env[1420]: time="2024-02-12T19:45:08.501727613Z" level=info msg="StartContainer for \"6c07c74c36723088a38e8ada25ad4cf2783adbb15619884cbdf84a5b8fd6405e\""
Feb 12 19:45:08.549915 env[1420]: time="2024-02-12T19:45:08.549845734Z" level=info msg="StartContainer for \"6c07c74c36723088a38e8ada25ad4cf2783adbb15619884cbdf84a5b8fd6405e\" returns successfully"
Feb 12 19:45:08.584208 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.HmnEOp.mount: Deactivated successfully.
Feb 12 19:45:08.680000 audit[4510]: AVC avc: denied { write } for pid=4510 comm="tee" name="fd" dev="proc" ino=33511 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.686827 kernel: kauditd_printk_skb: 8 callbacks suppressed
Feb 12 19:45:08.686916 kernel: audit: type=1400 audit(1707767108.680:296): avc: denied { write } for pid=4510 comm="tee" name="fd" dev="proc" ino=33511 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.686000 audit[4524]: AVC avc: denied { write } for pid=4524 comm="tee" name="fd" dev="proc" ino=34453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.719564 kernel: audit: type=1400 audit(1707767108.686:297): avc: denied { write } for pid=4524 comm="tee" name="fd" dev="proc" ino=34453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.740928 kernel: audit: type=1300 audit(1707767108.686:297): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2e2a1971 a2=241 a3=1b6 items=1 ppid=4491 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.686000 audit[4524]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2e2a1971 a2=241 a3=1b6 items=1 ppid=4491 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.686000 audit: CWD cwd="/etc/service/enabled/felix/log"
Feb 12 19:45:08.686000 audit: PATH item=0 name="/dev/fd/63" inode=34447 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.762957 kernel: audit: type=1307 audit(1707767108.686:297): cwd="/etc/service/enabled/felix/log"
Feb 12 19:45:08.763047 kernel: audit: type=1302 audit(1707767108.686:297): item=0 name="/dev/fd/63" inode=34447 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.686000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.776354 kernel: audit: type=1327 audit(1707767108.686:297): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.680000 audit[4510]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0c1e9973 a2=241 a3=1b6 items=1 ppid=4488 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.798357 kernel: audit: type=1300 audit(1707767108.680:296): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0c1e9973 a2=241 a3=1b6 items=1 ppid=4488 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.680000 audit: CWD cwd="/etc/service/enabled/cni/log"
Feb 12 19:45:08.680000 audit: PATH item=0 name="/dev/fd/63" inode=33494 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.819103 kernel: audit: type=1307 audit(1707767108.680:296): cwd="/etc/service/enabled/cni/log"
Feb 12 19:45:08.819188 kernel: audit: type=1302 audit(1707767108.680:296): item=0 name="/dev/fd/63" inode=33494 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.819218 kernel: audit: type=1327 audit(1707767108.680:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.734000 audit[4533]: AVC avc: denied { write } for pid=4533 comm="tee" name="fd" dev="proc" ino=34483 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.734000 audit[4533]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb6cba972 a2=241 a3=1b6 items=1 ppid=4485 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.734000 audit: CWD cwd="/etc/service/enabled/bird/log"
Feb 12 19:45:08.734000 audit: PATH item=0 name="/dev/fd/63" inode=34461 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.734000 audit[4539]: AVC avc: denied { write } for pid=4539 comm="tee" name="fd" dev="proc" ino=34487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.734000 audit[4539]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8dd29971 a2=241 a3=1b6 items=1 ppid=4484 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.734000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Feb 12 19:45:08.734000 audit: PATH item=0 name="/dev/fd/63" inode=34466 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.748000 audit[4546]: AVC avc: denied { write } for pid=4546 comm="tee" name="fd" dev="proc" ino=34496 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.748000 audit[4546]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce3560961 a2=241 a3=1b6 items=1 ppid=4495 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.748000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Feb 12 19:45:08.748000 audit: PATH item=0 name="/dev/fd/63" inode=34474 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.748000 audit[4548]: AVC avc: denied { write } for pid=4548 comm="tee" name="fd" dev="proc" ino=34500 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.748000 audit[4548]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff78439962 a2=241 a3=1b6 items=1 ppid=4490 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.748000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Feb 12 19:45:08.748000 audit: PATH item=0 name="/dev/fd/63" inode=34477 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:08.791000 audit[4562]: AVC avc: denied { write } for pid=4562 comm="tee" name="fd" dev="proc" ino=34517 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 19:45:08.791000 audit[4562]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc114a7971 a2=241 a3=1b6 items=1 ppid=4496 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:08.791000 audit: CWD cwd="/etc/service/enabled/confd/log"
Feb 12 19:45:08.791000 audit: PATH item=0 name="/dev/fd/63" inode=34514 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:45:08.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.149000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:45:09.149000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc95051f70 a2=70 a3=7f5153cc9000 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.149000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.153000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.153000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:45:09.153000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc95051f70 a2=70 a3=6e items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.153000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.154000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc95051f20 a2=70 a3=7ffc95051f70 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.154000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.154000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:45:09.154000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc95051f00 a2=70 a3=7ffc95051f70 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.154000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.155000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:45:09.155000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.155000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc95051fe0 a2=70 a3=0 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.155000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.155000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc95051fd0 a2=70 a3=0 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.155000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.155000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc95052010 a2=70 a3=0 items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { perfmon } for pid=4621 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit[4621]: AVC avc: denied { bpf } for pid=4621 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.156000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:45:09.156000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc95051f30 a2=70 a3=ffffffff items=0 ppid=4492 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.156000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 12 19:45:09.160000 audit[4623]: AVC avc: denied { bpf } for pid=4623 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.160000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe02b13af0 a2=70 a3=fff80800 items=0 ppid=4492 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Feb 12 19:45:09.160000 audit[4623]: AVC avc: denied { bpf } for pid=4623 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:45:09.160000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe02b139c0 a2=70 a3=3 items=0 ppid=4492 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:45:09.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Feb 12 19:45:09.175000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:45:09.282393 kubelet[2648]: I0212 19:45:09.279368 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ws7rt"
podStartSLOduration=63.279306666 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:09.278910564 +0000 UTC m=+75.449738500" watchObservedRunningTime="2024-02-12 19:45:09.279306666 +0000 UTC m=+75.450134602" Feb 12 19:45:09.328000 audit[4654]: NETFILTER_CFG table=mangle:119 family=2 entries=19 op=nft_register_chain pid=4654 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:09.328000 audit[4654]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffcb71873d0 a2=0 a3=7ffcb71873bc items=0 ppid=4492 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.328000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:09.339000 audit[4661]: NETFILTER_CFG table=nat:120 family=2 entries=16 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:09.339000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fffea874400 a2=0 a3=7fffea8743ec items=0 ppid=4492 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.339000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:09.342000 audit[4663]: NETFILTER_CFG table=filter:121 family=2 entries=71 op=nft_register_chain pid=4663 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 
19:45:09.342000 audit[4663]: SYSCALL arch=c000003e syscall=46 success=yes exit=36636 a0=3 a1=7ffe35ed6f40 a2=0 a3=555f543a7000 items=0 ppid=4492 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:09.361000 audit[4682]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:09.361000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc18ec0d60 a2=0 a3=7ffc18ec0d4c items=0 ppid=2844 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:09.362000 audit[4682]: NETFILTER_CFG table=nat:123 family=2 entries=30 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:09.362000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc18ec0d60 a2=0 a3=7ffc18ec0d4c items=0 ppid=2844 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:09.366000 audit[4658]: NETFILTER_CFG table=raw:124 
family=2 entries=19 op=nft_register_chain pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:09.366000 audit[4658]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffdb7572970 a2=0 a3=55ed39eb0000 items=0 ppid=4492 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.366000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:09.406000 audit[4708]: NETFILTER_CFG table=filter:125 family=2 entries=9 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:09.406000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffbd54e480 a2=0 a3=7fffbd54e46c items=0 ppid=2844 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:09.408000 audit[4708]: NETFILTER_CFG table=nat:126 family=2 entries=51 op=nft_register_chain pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:09.408000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7fffbd54e480 a2=0 a3=7fffbd54e46c items=0 ppid=2844 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.408000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:09.472530 systemd-networkd[1592]: cali51a79a28a68: Gained IPv6LL Feb 12 19:45:10.101235 systemd-networkd[1592]: vxlan.calico: Link UP Feb 12 19:45:10.101245 systemd-networkd[1592]: vxlan.calico: Gained carrier Feb 12 19:45:11.904728 systemd-networkd[1592]: vxlan.calico: Gained IPv6LL Feb 12 19:45:19.065030 env[1420]: time="2024-02-12T19:45:19.062962904Z" level=info msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.110 [INFO][4738] k8s.go 578: Cleaning up netns ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.110 [INFO][4738] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" iface="eth0" netns="/var/run/netns/cni-f5d162d4-816f-fee1-8863-eef15b1bc686" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.117 [INFO][4738] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" iface="eth0" netns="/var/run/netns/cni-f5d162d4-816f-fee1-8863-eef15b1bc686" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.117 [INFO][4738] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" iface="eth0" netns="/var/run/netns/cni-f5d162d4-816f-fee1-8863-eef15b1bc686" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.117 [INFO][4738] k8s.go 585: Releasing IP address(es) ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.117 [INFO][4738] utils.go 188: Calico CNI releasing IP address ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.147 [INFO][4744] ipam_plugin.go 415: Releasing address using handleID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.148 [INFO][4744] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.148 [INFO][4744] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.155 [WARNING][4744] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.155 [INFO][4744] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.157 [INFO][4744] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:19.159318 env[1420]: 2024-02-12 19:45:19.158 [INFO][4738] k8s.go 591: Teardown processing complete. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:19.160873 env[1420]: time="2024-02-12T19:45:19.160821118Z" level=info msg="TearDown network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" successfully" Feb 12 19:45:19.161034 env[1420]: time="2024-02-12T19:45:19.161011219Z" level=info msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" returns successfully" Feb 12 19:45:19.161812 env[1420]: time="2024-02-12T19:45:19.161776822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6846549b-nwjfr,Uid:7fcdb66b-2bf1-45a1-90f0-a496bd670686,Namespace:calico-system,Attempt:1,}" Feb 12 19:45:19.163322 systemd[1]: run-netns-cni\x2df5d162d4\x2d816f\x2dfee1\x2d8863\x2deef15b1bc686.mount: Deactivated successfully. 
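An aside on reading the audit records above: the `proctitle=` fields are hex-encoded command lines with NUL-separated arguments. A minimal sketch (plain Python, not part of the log) that decodes the bpftool proctitle copied verbatim from the records earlier in this capture:

```python
# Decode an audit PROCTITLE field: hex-encoded argv, arguments joined by NUL bytes.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# Hex string copied from the bpftool audit record above.
proctitle = ("627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F"
             "6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864"
             "702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470")
print(decode_proctitle(proctitle))
# bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp
```

Decoded, the denied-then-retried syscalls correspond to Calico loading its XDP prefilter program via bpftool, which matches the surrounding `BPF prog-id=... op=LOAD/UNLOAD` events.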
Feb 12 19:45:19.494156 systemd-networkd[1592]: calif49bce17b1a: Link UP Feb 12 19:45:19.506476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:45:19.506555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif49bce17b1a: link becomes ready Feb 12 19:45:19.515103 systemd-networkd[1592]: calif49bce17b1a: Gained carrier Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.407 [INFO][4758] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0 calico-kube-controllers-6c6846549b- calico-system 7fcdb66b-2bf1-45a1-90f0-a496bd670686 857 0 2024-02-12 19:44:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c6846549b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 calico-kube-controllers-6c6846549b-nwjfr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif49bce17b1a [] []}} ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.407 [INFO][4758] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.430 [INFO][4771] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" 
HandleID="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.438 [INFO][4771] ipam_plugin.go 268: Auto assigning IP ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" HandleID="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051d40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-c8dbf10a06", "pod":"calico-kube-controllers-6c6846549b-nwjfr", "timestamp":"2024-02-12 19:45:19.430015056 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.438 [INFO][4771] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.438 [INFO][4771] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.438 [INFO][4771] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06' Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.440 [INFO][4771] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.454 [INFO][4771] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.458 [INFO][4771] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.459 [INFO][4771] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.461 [INFO][4771] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.461 [INFO][4771] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.462 [INFO][4771] ipam.go 1682: Creating new handle: k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.471 [INFO][4771] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.476 [INFO][4771] ipam.go 1216: Successfully claimed IPs: [192.168.124.2/26] block=192.168.124.0/26 
handle="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.476 [INFO][4771] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.2/26] handle="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.476 [INFO][4771] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:19.530702 env[1420]: 2024-02-12 19:45:19.476 [INFO][4771] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.2/26] IPv6=[] ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" HandleID="k8s-pod-network.06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.531572 env[1420]: 2024-02-12 19:45:19.483 [INFO][4758] k8s.go 385: Populated endpoint ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0", GenerateName:"calico-kube-controllers-6c6846549b-", Namespace:"calico-system", SelfLink:"", UID:"7fcdb66b-2bf1-45a1-90f0-a496bd670686", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6846549b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"calico-kube-controllers-6c6846549b-nwjfr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif49bce17b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:19.531572 env[1420]: 2024-02-12 19:45:19.483 [INFO][4758] k8s.go 386: Calico CNI using IPs: [192.168.124.2/32] ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.531572 env[1420]: 2024-02-12 19:45:19.483 [INFO][4758] dataplane_linux.go 68: Setting the host side veth name to calif49bce17b1a ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.531572 env[1420]: 2024-02-12 19:45:19.515 [INFO][4758] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.531572 
env[1420]: 2024-02-12 19:45:19.515 [INFO][4758] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0", GenerateName:"calico-kube-controllers-6c6846549b-", Namespace:"calico-system", SelfLink:"", UID:"7fcdb66b-2bf1-45a1-90f0-a496bd670686", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6846549b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e", Pod:"calico-kube-controllers-6c6846549b-nwjfr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif49bce17b1a", MAC:"b6:de:81:d6:b0:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:19.531572 
env[1420]: 2024-02-12 19:45:19.529 [INFO][4758] k8s.go 491: Wrote updated endpoint to datastore ContainerID="06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e" Namespace="calico-system" Pod="calico-kube-controllers-6c6846549b-nwjfr" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:19.574836 env[1420]: time="2024-02-12T19:45:19.574774668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:19.574976 env[1420]: time="2024-02-12T19:45:19.574850868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:19.574976 env[1420]: time="2024-02-12T19:45:19.574877268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:19.575186 env[1420]: time="2024-02-12T19:45:19.575133369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e pid=4795 runtime=io.containerd.runc.v2 Feb 12 19:45:19.695925 kernel: kauditd_printk_skb: 120 callbacks suppressed Feb 12 19:45:19.696041 kernel: audit: type=1325 audit(1707767119.677:325): table=filter:127 family=2 entries=40 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:19.677000 audit[4829]: NETFILTER_CFG table=filter:127 family=2 entries=40 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:19.677000 audit[4829]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffebfcb5f80 a2=0 a3=7ffebfcb5f6c items=0 ppid=4492 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:19.714578 env[1420]: time="2024-02-12T19:45:19.714539359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6846549b-nwjfr,Uid:7fcdb66b-2bf1-45a1-90f0-a496bd670686,Namespace:calico-system,Attempt:1,} returns sandbox id \"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e\"" Feb 12 19:45:19.717989 env[1420]: time="2024-02-12T19:45:19.717967873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 12 19:45:19.723482 kernel: audit: type=1300 audit(1707767119.677:325): arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffebfcb5f80 a2=0 a3=7ffebfcb5f6c items=0 ppid=4492 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:19.677000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:19.737391 kernel: audit: type=1327 audit(1707767119.677:325): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:20.064058 env[1420]: time="2024-02-12T19:45:20.064008935Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.100 [INFO][4848] k8s.go 578: Cleaning up netns ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.100 [INFO][4848] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" iface="eth0" netns="/var/run/netns/cni-ea9fd581-c3db-63f2-5d01-4480a4cd739a" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.103 [INFO][4848] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" iface="eth0" netns="/var/run/netns/cni-ea9fd581-c3db-63f2-5d01-4480a4cd739a" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.103 [INFO][4848] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" iface="eth0" netns="/var/run/netns/cni-ea9fd581-c3db-63f2-5d01-4480a4cd739a" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.103 [INFO][4848] k8s.go 585: Releasing IP address(es) ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.103 [INFO][4848] utils.go 188: Calico CNI releasing IP address ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.121 [INFO][4854] ipam_plugin.go 415: Releasing address using handleID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.121 [INFO][4854] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.121 [INFO][4854] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.126 [WARNING][4854] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.126 [INFO][4854] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.128 [INFO][4854] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:20.129882 env[1420]: 2024-02-12 19:45:20.128 [INFO][4848] k8s.go 591: Teardown processing complete. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:20.130876 env[1420]: time="2024-02-12T19:45:20.130088112Z" level=info msg="TearDown network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" successfully" Feb 12 19:45:20.130876 env[1420]: time="2024-02-12T19:45:20.130135013Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" returns successfully" Feb 12 19:45:20.130876 env[1420]: time="2024-02-12T19:45:20.130826816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j8xlt,Uid:57d1cdf4-1214-4bf4-ab2d-b4fcd203788b,Namespace:kube-system,Attempt:1,}" Feb 12 19:45:20.164515 systemd[1]: run-netns-cni\x2dea9fd581\x2dc3db\x2d63f2\x2d5d01\x2d4480a4cd739a.mount: Deactivated successfully. 
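For context on the IPAM records above: Calico takes a host-wide IPAM lock, confirms the node's affinity for block 192.168.124.0/26, and assigns 192.168.124.2 from it. A small illustrative check with the standard `ipaddress` module (values copied from the log; the code is a sketch, not Calico's IPAM implementation):

```python
import ipaddress

# Block and address as reported by the Calico ipam.go records above.
block = ipaddress.ip_network("192.168.124.0/26")
assigned = ipaddress.ip_address("192.168.124.2")

print(assigned in block)    # the assigned address falls inside the affine block
print(block.num_addresses)  # a /26 block holds 64 addresses
```

The /26 block size is Calico's default per-node allocation granularity, which is why each node in the log claims and reuses the same block for successive pod sandboxes.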
Feb 12 19:45:20.546968 systemd-networkd[1592]: calif49bce17b1a: Gained IPv6LL Feb 12 19:45:20.556498 systemd-networkd[1592]: calia0503685bb3: Link UP Feb 12 19:45:20.568281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:45:20.568373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia0503685bb3: link becomes ready Feb 12 19:45:20.571645 systemd-networkd[1592]: calia0503685bb3: Gained carrier Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.496 [INFO][4861] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0 coredns-787d4945fb- kube-system 57d1cdf4-1214-4bf4-ab2d-b4fcd203788b 864 0 2024-02-12 19:44:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 coredns-787d4945fb-j8xlt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0503685bb3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.496 [INFO][4861] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.520 [INFO][4873] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" HandleID="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" 
Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.527 [INFO][4873] ipam_plugin.go 268: Auto assigning IP ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" HandleID="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-c8dbf10a06", "pod":"coredns-787d4945fb-j8xlt", "timestamp":"2024-02-12 19:45:20.520153551 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.529 [INFO][4873] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.529 [INFO][4873] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.529 [INFO][4873] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06' Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.530 [INFO][4873] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.533 [INFO][4873] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.536 [INFO][4873] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.537 [INFO][4873] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.539 [INFO][4873] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.539 [INFO][4873] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.540 [INFO][4873] ipam.go 1682: Creating new handle: k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8 Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.543 [INFO][4873] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.549 [INFO][4873] ipam.go 1216: Successfully claimed IPs: [192.168.124.3/26] block=192.168.124.0/26 
handle="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.549 [INFO][4873] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.3/26] handle="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.550 [INFO][4873] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:20.579545 env[1420]: 2024-02-12 19:45:20.550 [INFO][4873] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.3/26] IPv6=[] ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" HandleID="k8s-pod-network.2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.552 [INFO][4861] k8s.go 385: Populated endpoint ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"coredns-787d4945fb-j8xlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0503685bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.552 [INFO][4861] k8s.go 386: Calico CNI using IPs: [192.168.124.3/32] ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.552 [INFO][4861] dataplane_linux.go 68: Setting the host side veth name to calia0503685bb3 ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.569 [INFO][4861] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" 
WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.569 [INFO][4861] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8", Pod:"coredns-787d4945fb-j8xlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0503685bb3", MAC:"b2:78:54:e0:bc:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:20.580913 env[1420]: 2024-02-12 19:45:20.577 [INFO][4861] k8s.go 491: Wrote updated endpoint to datastore ContainerID="2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8" Namespace="kube-system" Pod="coredns-787d4945fb-j8xlt" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:20.593000 audit[4890]: NETFILTER_CFG table=filter:128 family=2 entries=34 op=nft_register_chain pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:20.608775 kernel: audit: type=1325 audit(1707767120.593:326): table=filter:128 family=2 entries=34 op=nft_register_chain pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:20.593000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7ffd731baeb0 a2=0 a3=7ffd731bae9c items=0 ppid=4492 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:20.630352 kernel: audit: type=1300 audit(1707767120.593:326): arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7ffd731baeb0 a2=0 a3=7ffd731bae9c items=0 ppid=4492 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:20.630414 kernel: audit: type=1327 audit(1707767120.593:326): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 
19:45:20.593000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:20.636826 env[1420]: time="2024-02-12T19:45:20.636779341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:20.636945 env[1420]: time="2024-02-12T19:45:20.636925042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:20.637024 env[1420]: time="2024-02-12T19:45:20.637006842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:20.637312 env[1420]: time="2024-02-12T19:45:20.637280043Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8 pid=4898 runtime=io.containerd.runc.v2 Feb 12 19:45:20.716986 env[1420]: time="2024-02-12T19:45:20.716934978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j8xlt,Uid:57d1cdf4-1214-4bf4-ab2d-b4fcd203788b,Namespace:kube-system,Attempt:1,} returns sandbox id \"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8\"" Feb 12 19:45:20.721436 env[1420]: time="2024-02-12T19:45:20.721401597Z" level=info msg="CreateContainer within sandbox \"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:45:21.063486 env[1420]: time="2024-02-12T19:45:21.063436332Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:45:21.121864 env[1420]: time="2024-02-12T19:45:21.121808076Z" level=info msg="CreateContainer within sandbox 
\"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bc03a2ebc62d86ef2209315c93148d405314dd78370b707b7a02aca56f83920\"" Feb 12 19:45:21.122716 env[1420]: time="2024-02-12T19:45:21.122684480Z" level=info msg="StartContainer for \"6bc03a2ebc62d86ef2209315c93148d405314dd78370b707b7a02aca56f83920\"" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.119 [INFO][4948] k8s.go 578: Cleaning up netns ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.120 [INFO][4948] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" iface="eth0" netns="/var/run/netns/cni-f736a927-e368-a132-e2de-c255e601e8e3" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.120 [INFO][4948] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" iface="eth0" netns="/var/run/netns/cni-f736a927-e368-a132-e2de-c255e601e8e3" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.120 [INFO][4948] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" iface="eth0" netns="/var/run/netns/cni-f736a927-e368-a132-e2de-c255e601e8e3" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.120 [INFO][4948] k8s.go 585: Releasing IP address(es) ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.120 [INFO][4948] utils.go 188: Calico CNI releasing IP address ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.177 [INFO][4954] ipam_plugin.go 415: Releasing address using handleID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.177 [INFO][4954] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.177 [INFO][4954] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.200 [WARNING][4954] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.221 [INFO][4954] ipam_plugin.go 443: Releasing address using workloadID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.222 [INFO][4954] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:21.225383 env[1420]: 2024-02-12 19:45:21.224 [INFO][4948] k8s.go 591: Teardown processing complete. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:21.230012 env[1420]: time="2024-02-12T19:45:21.225532809Z" level=info msg="TearDown network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" successfully" Feb 12 19:45:21.230012 env[1420]: time="2024-02-12T19:45:21.225567609Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" returns successfully" Feb 12 19:45:21.230012 env[1420]: time="2024-02-12T19:45:21.228822123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z7w9,Uid:7deb942c-192b-42b5-8511-5e8ab4d0a3b5,Namespace:calico-system,Attempt:1,}" Feb 12 19:45:21.231689 systemd[1]: run-netns-cni\x2df736a927\x2de368\x2da132\x2de2de\x2dc255e601e8e3.mount: Deactivated successfully. 
Feb 12 19:45:21.276328 env[1420]: time="2024-02-12T19:45:21.276138220Z" level=info msg="StartContainer for \"6bc03a2ebc62d86ef2209315c93148d405314dd78370b707b7a02aca56f83920\" returns successfully" Feb 12 19:45:21.311475 kubelet[2648]: I0212 19:45:21.311404 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-j8xlt" podStartSLOduration=75.311366267 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:21.310852765 +0000 UTC m=+87.481680601" watchObservedRunningTime="2024-02-12 19:45:21.311366267 +0000 UTC m=+87.482194203" Feb 12 19:45:21.355000 audit[5021]: NETFILTER_CFG table=filter:129 family=2 entries=6 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:21.355000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc76c6e370 a2=0 a3=7ffc76c6e35c items=0 ppid=2844 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:21.387741 kernel: audit: type=1325 audit(1707767121.355:327): table=filter:129 family=2 entries=6 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:21.387831 kernel: audit: type=1300 audit(1707767121.355:327): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc76c6e370 a2=0 a3=7ffc76c6e35c items=0 ppid=2844 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:21.389361 kernel: audit: type=1327 audit(1707767121.355:327): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:21.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:21.358000 audit[5021]: NETFILTER_CFG table=nat:130 family=2 entries=60 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:21.358000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc76c6e370 a2=0 a3=7ffc76c6e35c items=0 ppid=2844 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:21.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:21.410359 kernel: audit: type=1325 audit(1707767121.358:328): table=nat:130 family=2 entries=60 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:21.864558 systemd-networkd[1592]: cali4ce175505ab: Link UP Feb 12 19:45:21.879638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:45:21.879726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4ce175505ab: link becomes ready Feb 12 19:45:21.880377 systemd-networkd[1592]: cali4ce175505ab: Gained carrier Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.802 [INFO][5023] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0 csi-node-driver- calico-system 7deb942c-192b-42b5-8511-5e8ab4d0a3b5 872 0 2024-02-12 19:44:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 csi-node-driver-8z7w9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali4ce175505ab [] []}} ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.802 [INFO][5023] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.828 [INFO][5034] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" HandleID="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.841 [INFO][5034] ipam_plugin.go 268: Auto assigning IP ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" HandleID="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002914e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-c8dbf10a06", "pod":"csi-node-driver-8z7w9", "timestamp":"2024-02-12 19:45:21.828856428 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.841 [INFO][5034] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.841 [INFO][5034] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.841 [INFO][5034] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06' Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.842 [INFO][5034] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.845 [INFO][5034] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.848 [INFO][5034] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.850 [INFO][5034] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.851 [INFO][5034] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.852 [INFO][5034] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.853 [INFO][5034] ipam.go 1682: Creating new handle: k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.856 [INFO][5034] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 
handle="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.860 [INFO][5034] ipam.go 1216: Successfully claimed IPs: [192.168.124.4/26] block=192.168.124.0/26 handle="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.860 [INFO][5034] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.4/26] handle="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.860 [INFO][5034] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:21.906324 env[1420]: 2024-02-12 19:45:21.860 [INFO][5034] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.4/26] IPv6=[] ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" HandleID="k8s-pod-network.2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.907221 env[1420]: 2024-02-12 19:45:21.862 [INFO][5023] k8s.go 385: Populated endpoint ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7deb942c-192b-42b5-8511-5e8ab4d0a3b5", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"csi-node-driver-8z7w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4ce175505ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:21.907221 env[1420]: 2024-02-12 19:45:21.862 [INFO][5023] k8s.go 386: Calico CNI using IPs: [192.168.124.4/32] ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.907221 env[1420]: 2024-02-12 19:45:21.862 [INFO][5023] dataplane_linux.go 68: Setting the host side veth name to cali4ce175505ab ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.907221 env[1420]: 2024-02-12 19:45:21.880 [INFO][5023] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 
19:45:21.907221 env[1420]: 2024-02-12 19:45:21.881 [INFO][5023] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7deb942c-192b-42b5-8511-5e8ab4d0a3b5", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b", Pod:"csi-node-driver-8z7w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4ce175505ab", MAC:"66:ab:ed:e2:ae:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:21.907221 env[1420]: 2024-02-12 19:45:21.904 [INFO][5023] k8s.go 491: Wrote updated endpoint 
to datastore ContainerID="2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b" Namespace="calico-system" Pod="csi-node-driver-8z7w9" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:21.921000 audit[5052]: NETFILTER_CFG table=filter:131 family=2 entries=42 op=nft_register_chain pid=5052 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:21.921000 audit[5052]: SYSCALL arch=c000003e syscall=46 success=yes exit=20696 a0=3 a1=7ffd2dafbdb0 a2=0 a3=7ffd2dafbd9c items=0 ppid=4492 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:21.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:21.984102 env[1420]: time="2024-02-12T19:45:21.984035876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:21.984265 env[1420]: time="2024-02-12T19:45:21.984077776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:21.984265 env[1420]: time="2024-02-12T19:45:21.984109077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:21.984467 env[1420]: time="2024-02-12T19:45:21.984391778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b pid=5063 runtime=io.containerd.runc.v2 Feb 12 19:45:22.016447 systemd-networkd[1592]: calia0503685bb3: Gained IPv6LL Feb 12 19:45:22.024361 env[1420]: time="2024-02-12T19:45:22.024098143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z7w9,Uid:7deb942c-192b-42b5-8511-5e8ab4d0a3b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b\"" Feb 12 19:45:22.367000 audit[5121]: NETFILTER_CFG table=filter:132 family=2 entries=6 op=nft_register_rule pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:22.367000 audit[5121]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff0d73d2b0 a2=0 a3=7fff0d73d29c items=0 ppid=2844 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:22.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:22.393000 audit[5121]: NETFILTER_CFG table=nat:133 family=2 entries=72 op=nft_register_chain pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:22.393000 audit[5121]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff0d73d2b0 a2=0 a3=7fff0d73d29c items=0 ppid=2844 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:22.393000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:23.744775 systemd-networkd[1592]: cali4ce175505ab: Gained IPv6LL Feb 12 19:45:28.661295 env[1420]: time="2024-02-12T19:45:28.661242857Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:28.668048 env[1420]: time="2024-02-12T19:45:28.668005684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:28.672899 env[1420]: time="2024-02-12T19:45:28.672856703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:28.677965 env[1420]: time="2024-02-12T19:45:28.677935524Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:28.678861 env[1420]: time="2024-02-12T19:45:28.678830127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 12 19:45:28.682307 env[1420]: time="2024-02-12T19:45:28.682150341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 19:45:28.700294 env[1420]: time="2024-02-12T19:45:28.700263313Z" level=info msg="CreateContainer within sandbox \"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 12 19:45:28.730464 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2843241439.mount: Deactivated successfully. Feb 12 19:45:28.743323 env[1420]: time="2024-02-12T19:45:28.743278786Z" level=info msg="CreateContainer within sandbox \"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07\"" Feb 12 19:45:28.744868 env[1420]: time="2024-02-12T19:45:28.743806088Z" level=info msg="StartContainer for \"906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07\"" Feb 12 19:45:28.828509 env[1420]: time="2024-02-12T19:45:28.828460228Z" level=info msg="StartContainer for \"906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07\" returns successfully" Feb 12 19:45:29.396667 kubelet[2648]: I0212 19:45:29.396620 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c6846549b-nwjfr" podStartSLOduration=-9.223371964458405e+09 pod.CreationTimestamp="2024-02-12 19:44:17 +0000 UTC" firstStartedPulling="2024-02-12 19:45:19.717541072 +0000 UTC m=+85.888368908" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:29.333695251 +0000 UTC m=+95.504523087" watchObservedRunningTime="2024-02-12 19:45:29.396372202 +0000 UTC m=+95.567200138" Feb 12 19:45:31.089711 kubelet[2648]: I0212 19:45:31.089638 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:31.117111 kubelet[2648]: I0212 19:45:31.117078 2648 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:31.218000 audit[5234]: NETFILTER_CFG table=filter:134 family=2 entries=7 op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.224520 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 12 19:45:31.224613 kernel: audit: type=1325 audit(1707767131.218:332): table=filter:134 family=2 entries=7 
op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.218000 audit[5234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcd1eb1140 a2=0 a3=7ffcd1eb112c items=0 ppid=2844 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.256412 kernel: audit: type=1300 audit(1707767131.218:332): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcd1eb1140 a2=0 a3=7ffcd1eb112c items=0 ppid=2844 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.220000 audit[5234]: NETFILTER_CFG table=nat:135 family=2 entries=78 op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.277214 kernel: audit: type=1327 audit(1707767131.218:332): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.277313 kernel: audit: type=1325 audit(1707767131.220:333): table=nat:135 family=2 entries=78 op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.220000 audit[5234]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcd1eb1140 a2=0 a3=7ffcd1eb112c items=0 ppid=2844 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.299357 kernel: audit: type=1300 audit(1707767131.220:333): 
arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcd1eb1140 a2=0 a3=7ffcd1eb112c items=0 ppid=2844 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.299692 kubelet[2648]: I0212 19:45:31.299668 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bfl\" (UniqueName: \"kubernetes.io/projected/390829b6-7c2a-4b29-9875-cdc698b9e922-kube-api-access-j9bfl\") pod \"calico-apiserver-7d78fbdbdf-z87k7\" (UID: \"390829b6-7c2a-4b29-9875-cdc698b9e922\") " pod="calico-apiserver/calico-apiserver-7d78fbdbdf-z87k7" Feb 12 19:45:31.299812 kubelet[2648]: I0212 19:45:31.299772 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn6ff\" (UniqueName: \"kubernetes.io/projected/428006d8-212b-4c2f-8faa-35433c1aecc2-kube-api-access-jn6ff\") pod \"calico-apiserver-7d78fbdbdf-bgtlc\" (UID: \"428006d8-212b-4c2f-8faa-35433c1aecc2\") " pod="calico-apiserver/calico-apiserver-7d78fbdbdf-bgtlc" Feb 12 19:45:31.299870 kubelet[2648]: I0212 19:45:31.299840 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/390829b6-7c2a-4b29-9875-cdc698b9e922-calico-apiserver-certs\") pod \"calico-apiserver-7d78fbdbdf-z87k7\" (UID: \"390829b6-7c2a-4b29-9875-cdc698b9e922\") " pod="calico-apiserver/calico-apiserver-7d78fbdbdf-z87k7" Feb 12 19:45:31.299918 kubelet[2648]: I0212 19:45:31.299896 2648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/428006d8-212b-4c2f-8faa-35433c1aecc2-calico-apiserver-certs\") pod \"calico-apiserver-7d78fbdbdf-bgtlc\" (UID: 
\"428006d8-212b-4c2f-8faa-35433c1aecc2\") " pod="calico-apiserver/calico-apiserver-7d78fbdbdf-bgtlc" Feb 12 19:45:31.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.311348 kernel: audit: type=1327 audit(1707767131.220:333): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.348000 audit[5260]: NETFILTER_CFG table=filter:136 family=2 entries=8 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.348000 audit[5260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc579001f0 a2=0 a3=7ffc579001dc items=0 ppid=2844 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.382211 kernel: audit: type=1325 audit(1707767131.348:334): table=filter:136 family=2 entries=8 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.382315 kernel: audit: type=1300 audit(1707767131.348:334): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc579001f0 a2=0 a3=7ffc579001dc items=0 ppid=2844 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.382360 kernel: audit: type=1327 audit(1707767131.348:334): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.349000 audit[5260]: NETFILTER_CFG table=nat:137 family=2 
entries=78 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.401927 kubelet[2648]: E0212 19:45:31.400610 2648 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 19:45:31.401927 kubelet[2648]: E0212 19:45:31.400873 2648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/428006d8-212b-4c2f-8faa-35433c1aecc2-calico-apiserver-certs podName:428006d8-212b-4c2f-8faa-35433c1aecc2 nodeName:}" failed. No retries permitted until 2024-02-12 19:45:31.900848379 +0000 UTC m=+98.071676215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/428006d8-212b-4c2f-8faa-35433c1aecc2-calico-apiserver-certs") pod "calico-apiserver-7d78fbdbdf-bgtlc" (UID: "428006d8-212b-4c2f-8faa-35433c1aecc2") : secret "calico-apiserver-certs" not found Feb 12 19:45:31.401927 kubelet[2648]: E0212 19:45:31.401313 2648 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 19:45:31.401927 kubelet[2648]: E0212 19:45:31.401808 2648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/390829b6-7c2a-4b29-9875-cdc698b9e922-calico-apiserver-certs podName:390829b6-7c2a-4b29-9875-cdc698b9e922 nodeName:}" failed. No retries permitted until 2024-02-12 19:45:31.901789283 +0000 UTC m=+98.072617219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/390829b6-7c2a-4b29-9875-cdc698b9e922-calico-apiserver-certs") pod "calico-apiserver-7d78fbdbdf-z87k7" (UID: "390829b6-7c2a-4b29-9875-cdc698b9e922") : secret "calico-apiserver-certs" not found Feb 12 19:45:31.403723 kernel: audit: type=1325 audit(1707767131.349:335): table=nat:137 family=2 entries=78 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:31.349000 audit[5260]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc579001f0 a2=0 a3=7ffc579001dc items=0 ppid=2844 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:31.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:31.710977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841602065.mount: Deactivated successfully. 
Feb 12 19:45:31.996101 env[1420]: time="2024-02-12T19:45:31.995913135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d78fbdbdf-z87k7,Uid:390829b6-7c2a-4b29-9875-cdc698b9e922,Namespace:calico-apiserver,Attempt:0,}" Feb 12 19:45:32.021314 env[1420]: time="2024-02-12T19:45:32.021264335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d78fbdbdf-bgtlc,Uid:428006d8-212b-4c2f-8faa-35433c1aecc2,Namespace:calico-apiserver,Attempt:0,}" Feb 12 19:45:33.787330 systemd-networkd[1592]: calid40d85e5bad: Link UP Feb 12 19:45:33.801030 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:45:33.801246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid40d85e5bad: link becomes ready Feb 12 19:45:33.801477 systemd-networkd[1592]: calid40d85e5bad: Gained carrier Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.707 [INFO][5267] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0 calico-apiserver-7d78fbdbdf- calico-apiserver 390829b6-7c2a-4b29-9875-cdc698b9e922 962 0 2024-02-12 19:45:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d78fbdbdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 calico-apiserver-7d78fbdbdf-z87k7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid40d85e5bad [] []}} ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.707 [INFO][5267] k8s.go 76: Extracted identifiers for CmdAddK8s 
ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.746 [INFO][5279] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" HandleID="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.756 [INFO][5279] ipam_plugin.go 268: Auto assigning IP ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" HandleID="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027caf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-c8dbf10a06", "pod":"calico-apiserver-7d78fbdbdf-z87k7", "timestamp":"2024-02-12 19:45:33.746704221 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.756 [INFO][5279] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.756 [INFO][5279] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.756 [INFO][5279] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06' Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.758 [INFO][5279] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.762 [INFO][5279] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.764 [INFO][5279] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.766 [INFO][5279] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.768 [INFO][5279] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.768 [INFO][5279] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.769 [INFO][5279] ipam.go 1682: Creating new handle: k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08 Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.773 [INFO][5279] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.781 [INFO][5279] ipam.go 1216: Successfully claimed IPs: [192.168.124.5/26] block=192.168.124.0/26 
handle="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.781 [INFO][5279] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.5/26] handle="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.781 [INFO][5279] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:33.824936 env[1420]: 2024-02-12 19:45:33.781 [INFO][5279] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.5/26] IPv6=[] ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" HandleID="k8s-pod-network.248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.784 [INFO][5267] k8s.go 385: Populated endpoint ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0", GenerateName:"calico-apiserver-7d78fbdbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"390829b6-7c2a-4b29-9875-cdc698b9e922", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d78fbdbdf", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"calico-apiserver-7d78fbdbdf-z87k7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid40d85e5bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.784 [INFO][5267] k8s.go 386: Calico CNI using IPs: [192.168.124.5/32] ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.784 [INFO][5267] dataplane_linux.go 68: Setting the host side veth name to calid40d85e5bad ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.790 [INFO][5267] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.790 [INFO][5267] k8s.go 413: Added Mac, interface name, and active container 
ID to endpoint ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0", GenerateName:"calico-apiserver-7d78fbdbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"390829b6-7c2a-4b29-9875-cdc698b9e922", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d78fbdbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08", Pod:"calico-apiserver-7d78fbdbdf-z87k7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid40d85e5bad", MAC:"3a:69:e1:2e:44:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:33.826168 env[1420]: 2024-02-12 19:45:33.822 [INFO][5267] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-z87k7" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--z87k7-eth0" Feb 12 19:45:33.875000 audit[5325]: NETFILTER_CFG table=filter:138 family=2 entries=59 op=nft_register_chain pid=5325 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:33.875000 audit[5325]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7fff2be5a890 a2=0 a3=7fff2be5a87c items=0 ppid=4492 pid=5325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:33.875000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:33.896252 env[1420]: time="2024-02-12T19:45:33.896179608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:33.896456 env[1420]: time="2024-02-12T19:45:33.896425609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:33.896595 env[1420]: time="2024-02-12T19:45:33.896563309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:33.896843 env[1420]: time="2024-02-12T19:45:33.896803710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08 pid=5333 runtime=io.containerd.runc.v2 Feb 12 19:45:33.960436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibf108b327e5: link becomes ready Feb 12 19:45:33.961438 systemd-networkd[1592]: calibf108b327e5: Link UP Feb 12 19:45:33.961692 systemd-networkd[1592]: calibf108b327e5: Gained carrier Feb 12 19:45:34.006000 audit[5366]: NETFILTER_CFG table=filter:139 family=2 entries=50 op=nft_register_chain pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:45:34.006000 audit[5366]: SYSCALL arch=c000003e syscall=46 success=yes exit=24496 a0=3 a1=7ffe13431000 a2=0 a3=7ffe13430fec items=0 ppid=4492 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:34.006000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:45:34.024760 env[1420]: time="2024-02-12T19:45:34.010043954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d78fbdbdf-z87k7,Uid:390829b6-7c2a-4b29-9875-cdc698b9e922,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08\"" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.835 [INFO][5283] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0 calico-apiserver-7d78fbdbdf- calico-apiserver 428006d8-212b-4c2f-8faa-35433c1aecc2 969 0 
2024-02-12 19:45:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d78fbdbdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-c8dbf10a06 calico-apiserver-7d78fbdbdf-bgtlc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf108b327e5 [] []}} ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.841 [INFO][5283] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.898 [INFO][5318] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" HandleID="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.915 [INFO][5318] ipam_plugin.go 268: Auto assigning IP ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" HandleID="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-c8dbf10a06", 
"pod":"calico-apiserver-7d78fbdbdf-bgtlc", "timestamp":"2024-02-12 19:45:33.898256816 +0000 UTC"}, Hostname:"ci-3510.3.2-a-c8dbf10a06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.915 [INFO][5318] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.915 [INFO][5318] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.915 [INFO][5318] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-c8dbf10a06' Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.917 [INFO][5318] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.922 [INFO][5318] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.930 [INFO][5318] ipam.go 489: Trying affinity for 192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.932 [INFO][5318] ipam.go 155: Attempting to load block cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.935 [INFO][5318] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.935 [INFO][5318] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.936 [INFO][5318] ipam.go 
1682: Creating new handle: k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.940 [INFO][5318] ipam.go 1203: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.948 [INFO][5318] ipam.go 1216: Successfully claimed IPs: [192.168.124.6/26] block=192.168.124.0/26 handle="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.948 [INFO][5318] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.6/26] handle="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" host="ci-3510.3.2-a-c8dbf10a06" Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.948 [INFO][5318] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:45:34.025693 env[1420]: 2024-02-12 19:45:33.948 [INFO][5318] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.124.6/26] IPv6=[] ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" HandleID="k8s-pod-network.70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.950 [INFO][5283] k8s.go 385: Populated endpoint ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0", GenerateName:"calico-apiserver-7d78fbdbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"428006d8-212b-4c2f-8faa-35433c1aecc2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d78fbdbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"", Pod:"calico-apiserver-7d78fbdbdf-bgtlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf108b327e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.950 [INFO][5283] k8s.go 386: Calico CNI using IPs: [192.168.124.6/32] ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.950 [INFO][5283] dataplane_linux.go 68: Setting the host side veth name to calibf108b327e5 ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.960 [INFO][5283] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.975 [INFO][5283] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0", 
GenerateName:"calico-apiserver-7d78fbdbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"428006d8-212b-4c2f-8faa-35433c1aecc2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d78fbdbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd", Pod:"calico-apiserver-7d78fbdbdf-bgtlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf108b327e5", MAC:"0a:99:bf:ae:e0:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:34.026647 env[1420]: 2024-02-12 19:45:33.986 [INFO][5283] k8s.go 491: Wrote updated endpoint to datastore ContainerID="70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd" Namespace="calico-apiserver" Pod="calico-apiserver-7d78fbdbdf-bgtlc" WorkloadEndpoint="ci--3510.3.2--a--c8dbf10a06-k8s-calico--apiserver--7d78fbdbdf--bgtlc-eth0" Feb 12 19:45:34.182661 env[1420]: time="2024-02-12T19:45:34.182584528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:34.182910 env[1420]: time="2024-02-12T19:45:34.182884029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:34.183030 env[1420]: time="2024-02-12T19:45:34.183009330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:34.184627 env[1420]: time="2024-02-12T19:45:34.183330731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd pid=5389 runtime=io.containerd.runc.v2 Feb 12 19:45:34.235709 env[1420]: time="2024-02-12T19:45:34.235669736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d78fbdbdf-bgtlc,Uid:428006d8-212b-4c2f-8faa-35433c1aecc2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd\"" Feb 12 19:45:34.944819 systemd-networkd[1592]: calid40d85e5bad: Gained IPv6LL Feb 12 19:45:35.533687 env[1420]: time="2024-02-12T19:45:35.533618697Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:35.620186 env[1420]: time="2024-02-12T19:45:35.620137933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:35.675015 env[1420]: time="2024-02-12T19:45:35.674969847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:35.721734 env[1420]: time="2024-02-12T19:45:35.721680628Z" 
level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:35.723422 env[1420]: time="2024-02-12T19:45:35.723377435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 12 19:45:35.726895 env[1420]: time="2024-02-12T19:45:35.726858548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 19:45:35.728862 env[1420]: time="2024-02-12T19:45:35.728827956Z" level=info msg="CreateContainer within sandbox \"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 19:45:35.904613 systemd-networkd[1592]: calibf108b327e5: Gained IPv6LL Feb 12 19:45:36.074016 env[1420]: time="2024-02-12T19:45:36.073965297Z" level=info msg="CreateContainer within sandbox \"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0cdbb91bba53d861dbc67cdb1685071ed89726fd479d3e10ab3e5a4d22b33e18\"" Feb 12 19:45:36.074654 env[1420]: time="2024-02-12T19:45:36.074619100Z" level=info msg="StartContainer for \"0cdbb91bba53d861dbc67cdb1685071ed89726fd479d3e10ab3e5a4d22b33e18\"" Feb 12 19:45:36.150290 env[1420]: time="2024-02-12T19:45:36.150253393Z" level=info msg="StartContainer for \"0cdbb91bba53d861dbc67cdb1685071ed89726fd479d3e10ab3e5a4d22b33e18\" returns successfully" Feb 12 19:45:36.933162 systemd[1]: run-containerd-runc-k8s.io-0cdbb91bba53d861dbc67cdb1685071ed89726fd479d3e10ab3e5a4d22b33e18-runc.yXYf4E.mount: Deactivated successfully. Feb 12 19:45:48.130937 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.tI5FD2.mount: Deactivated successfully. 
Feb 12 19:45:52.772952 env[1420]: time="2024-02-12T19:45:52.772905864Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:52.780552 env[1420]: time="2024-02-12T19:45:52.780512386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:52.788260 env[1420]: time="2024-02-12T19:45:52.788215211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:52.798892 env[1420]: time="2024-02-12T19:45:52.798841082Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:52.799717 env[1420]: time="2024-02-12T19:45:52.799681095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 12 19:45:52.801196 env[1420]: time="2024-02-12T19:45:52.801164219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 19:45:52.802674 env[1420]: time="2024-02-12T19:45:52.802640443Z" level=info msg="CreateContainer within sandbox \"248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 19:45:52.834674 env[1420]: time="2024-02-12T19:45:52.834632759Z" level=info msg="CreateContainer within sandbox \"248a2671daa778fb5582da0b0f0f57ec5c777716fa3e0e31e74ad5d03ceacb08\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35\"" Feb 12 19:45:52.835417 env[1420]: time="2024-02-12T19:45:52.835385271Z" level=info msg="StartContainer for \"e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35\"" Feb 12 19:45:52.943361 env[1420]: time="2024-02-12T19:45:52.942154393Z" level=info msg="StartContainer for \"e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35\" returns successfully" Feb 12 19:45:53.366779 env[1420]: time="2024-02-12T19:45:53.366731675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:53.379098 kubelet[2648]: I0212 19:45:53.379021 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d78fbdbdf-z87k7" podStartSLOduration=-9.22337201447581e+09 pod.CreationTimestamp="2024-02-12 19:45:31 +0000 UTC" firstStartedPulling="2024-02-12 19:45:34.011136258 +0000 UTC m=+100.181964094" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:53.378552264 +0000 UTC m=+119.549380200" watchObservedRunningTime="2024-02-12 19:45:53.37896717 +0000 UTC m=+119.549795006" Feb 12 19:45:53.386399 env[1420]: time="2024-02-12T19:45:53.385375673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:53.397214 env[1420]: time="2024-02-12T19:45:53.397186061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:53.403068 env[1420]: time="2024-02-12T19:45:53.403038454Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:53.403584 env[1420]: time="2024-02-12T19:45:53.403553862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 12 19:45:53.406115 env[1420]: time="2024-02-12T19:45:53.406083803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 19:45:53.407448 env[1420]: time="2024-02-12T19:45:53.407417224Z" level=info msg="CreateContainer within sandbox \"70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 19:45:53.471000 audit[5553]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:53.477439 env[1420]: time="2024-02-12T19:45:53.477389840Z" level=info msg="CreateContainer within sandbox \"70a01b5f84ec7019820afaff46b0f07efaf4eab984011289af2d512627c82bcd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65\"" Feb 12 19:45:53.478377 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 12 19:45:53.478450 kernel: audit: type=1325 audit(1707767153.471:338): table=filter:140 family=2 entries=8 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:53.478477 env[1420]: time="2024-02-12T19:45:53.478326555Z" level=info msg="StartContainer for \"069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65\"" Feb 12 19:45:53.471000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff692e98e0 a2=0 a3=7fff692e98cc items=0 ppid=2844 pid=5553 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:53.471000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:53.523238 kernel: audit: type=1300 audit(1707767153.471:338): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff692e98e0 a2=0 a3=7fff692e98cc items=0 ppid=2844 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:53.523367 kernel: audit: type=1327 audit(1707767153.471:338): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:53.511000 audit[5553]: NETFILTER_CFG table=nat:141 family=2 entries=78 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:53.534828 kernel: audit: type=1325 audit(1707767153.511:339): table=nat:141 family=2 entries=78 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:53.534886 kernel: audit: type=1300 audit(1707767153.511:339): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff692e98e0 a2=0 a3=7fff692e98cc items=0 ppid=2844 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:53.511000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff692e98e0 a2=0 a3=7fff692e98cc items=0 ppid=2844 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:53.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:53.569372 kernel: audit: type=1327 audit(1707767153.511:339): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:53.588831 env[1420]: time="2024-02-12T19:45:53.588786017Z" level=info msg="StartContainer for \"069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65\" returns successfully" Feb 12 19:45:54.227483 env[1420]: time="2024-02-12T19:45:54.227433566Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.258 [WARNING][5606] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7deb942c-192b-42b5-8511-5e8ab4d0a3b5", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b", Pod:"csi-node-driver-8z7w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4ce175505ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.258 [INFO][5606] k8s.go 578: Cleaning up netns ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.258 [INFO][5606] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" iface="eth0" netns="" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.258 [INFO][5606] k8s.go 585: Releasing IP address(es) ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.258 [INFO][5606] utils.go 188: Calico CNI releasing IP address ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.276 [INFO][5612] ipam_plugin.go 415: Releasing address using handleID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.276 [INFO][5612] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.276 [INFO][5612] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.282 [WARNING][5612] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.282 [INFO][5612] ipam_plugin.go 443: Releasing address using workloadID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.283 [INFO][5612] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.285192 env[1420]: 2024-02-12 19:45:54.284 [INFO][5606] k8s.go 591: Teardown processing complete. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.285192 env[1420]: time="2024-02-12T19:45:54.285182477Z" level=info msg="TearDown network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" successfully" Feb 12 19:45:54.286117 env[1420]: time="2024-02-12T19:45:54.285218678Z" level=info msg="StopPodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" returns successfully" Feb 12 19:45:54.286117 env[1420]: time="2024-02-12T19:45:54.285758386Z" level=info msg="RemovePodSandbox for \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:45:54.286117 env[1420]: time="2024-02-12T19:45:54.285795087Z" level=info msg="Forcibly stopping sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\"" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.316 [WARNING][5631] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete 
WEP. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7deb942c-192b-42b5-8511-5e8ab4d0a3b5", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b", Pod:"csi-node-driver-8z7w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4ce175505ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.316 [INFO][5631] k8s.go 578: Cleaning up netns ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.316 [INFO][5631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" iface="eth0" netns="" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.316 [INFO][5631] k8s.go 585: Releasing IP address(es) ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.316 [INFO][5631] utils.go 188: Calico CNI releasing IP address ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.333 [INFO][5637] ipam_plugin.go 415: Releasing address using handleID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.334 [INFO][5637] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.334 [INFO][5637] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.339 [WARNING][5637] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.339 [INFO][5637] ipam_plugin.go 443: Releasing address using workloadID ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" HandleID="k8s-pod-network.553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-csi--node--driver--8z7w9-eth0" Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.340 [INFO][5637] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:45:54.342264 env[1420]: 2024-02-12 19:45:54.341 [INFO][5631] k8s.go 591: Teardown processing complete. ContainerID="553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217" Feb 12 19:45:54.342916 env[1420]: time="2024-02-12T19:45:54.342406480Z" level=info msg="TearDown network for sandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" successfully" Feb 12 19:45:54.354018 env[1420]: time="2024-02-12T19:45:54.353972063Z" level=info msg="RemovePodSandbox \"553aa8f9a374ccd49b7355b1132b797363aa502c9ed27f8994ae15e4c330b217\" returns successfully" Feb 12 19:45:54.354407 env[1420]: time="2024-02-12T19:45:54.354379469Z" level=info msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:45:54.387128 kubelet[2648]: I0212 19:45:54.381098 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d78fbdbdf-bgtlc" podStartSLOduration=-9.223372013473717e+09 pod.CreationTimestamp="2024-02-12 19:45:31 +0000 UTC" firstStartedPulling="2024-02-12 19:45:34.23677374 +0000 UTC m=+100.407601676" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:54.379848371 +0000 UTC m=+120.550676307" watchObservedRunningTime="2024-02-12 19:45:54.38105869 +0000 UTC m=+120.551886526" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.402 [WARNING][5657] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0", GenerateName:"calico-kube-controllers-6c6846549b-", Namespace:"calico-system", SelfLink:"", UID:"7fcdb66b-2bf1-45a1-90f0-a496bd670686", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6846549b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e", Pod:"calico-kube-controllers-6c6846549b-nwjfr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif49bce17b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.402 [INFO][5657] k8s.go 578: Cleaning up netns ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.402 [INFO][5657] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" iface="eth0" netns="" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.402 [INFO][5657] k8s.go 585: Releasing IP address(es) ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.402 [INFO][5657] utils.go 188: Calico CNI releasing IP address ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.426 [INFO][5668] ipam_plugin.go 415: Releasing address using handleID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.426 [INFO][5668] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.426 [INFO][5668] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.435 [WARNING][5668] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.435 [INFO][5668] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.437 [INFO][5668] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.438823 env[1420]: 2024-02-12 19:45:54.437 [INFO][5657] k8s.go 591: Teardown processing complete. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.439776 env[1420]: time="2024-02-12T19:45:54.439712116Z" level=info msg="TearDown network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" successfully" Feb 12 19:45:54.439943 env[1420]: time="2024-02-12T19:45:54.439921319Z" level=info msg="StopPodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" returns successfully" Feb 12 19:45:54.440563 env[1420]: time="2024-02-12T19:45:54.440538129Z" level=info msg="RemovePodSandbox for \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:45:54.440757 env[1420]: time="2024-02-12T19:45:54.440688231Z" level=info msg="Forcibly stopping sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\"" Feb 12 19:45:54.464000 audit[5714]: NETFILTER_CFG table=filter:142 family=2 entries=8 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:54.479384 kernel: audit: type=1325 audit(1707767154.464:340): 
table=filter:142 family=2 entries=8 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:54.464000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd16934b60 a2=0 a3=7ffd16934b4c items=0 ppid=2844 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:54.504397 kernel: audit: type=1300 audit(1707767154.464:340): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd16934b60 a2=0 a3=7ffd16934b4c items=0 ppid=2844 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:54.525960 kernel: audit: type=1327 audit(1707767154.464:340): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:54.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:54.464000 audit[5714]: NETFILTER_CFG table=nat:143 family=2 entries=78 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:54.539398 kernel: audit: type=1325 audit(1707767154.464:341): table=nat:143 family=2 entries=78 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:45:54.464000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd16934b60 a2=0 a3=7ffd16934b4c items=0 ppid=2844 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:54.464000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.520 [WARNING][5707] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0", GenerateName:"calico-kube-controllers-6c6846549b-", Namespace:"calico-system", SelfLink:"", UID:"7fcdb66b-2bf1-45a1-90f0-a496bd670686", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6846549b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"06dcec484cb356cfefa77f232d7f5c10b8be59a6be7486e59887d0a84d98c90e", Pod:"calico-kube-controllers-6c6846549b-nwjfr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif49bce17b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.586882 env[1420]: 2024-02-12 
19:45:54.525 [INFO][5707] k8s.go 578: Cleaning up netns ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.525 [INFO][5707] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" iface="eth0" netns="" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.525 [INFO][5707] k8s.go 585: Releasing IP address(es) ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.525 [INFO][5707] utils.go 188: Calico CNI releasing IP address ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.574 [INFO][5716] ipam_plugin.go 415: Releasing address using handleID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.574 [INFO][5716] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.574 [INFO][5716] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.581 [WARNING][5716] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.581 [INFO][5716] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" HandleID="k8s-pod-network.9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-calico--kube--controllers--6c6846549b--nwjfr-eth0" Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.583 [INFO][5716] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.586882 env[1420]: 2024-02-12 19:45:54.584 [INFO][5707] k8s.go 591: Teardown processing complete. ContainerID="9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68" Feb 12 19:45:54.587402 env[1420]: time="2024-02-12T19:45:54.587376746Z" level=info msg="TearDown network for sandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" successfully" Feb 12 19:45:54.601858 env[1420]: time="2024-02-12T19:45:54.601822374Z" level=info msg="RemovePodSandbox \"9d27f27e802db36b37f861f91d7f8649b235d78eda79a57f4c62b60cd7caef68\" returns successfully" Feb 12 19:45:54.602377 env[1420]: time="2024-02-12T19:45:54.602329582Z" level=info msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.632 [WARNING][5735] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"4f0b1352-378c-4dae-bd2a-c8486e2500ed", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14", Pod:"coredns-787d4945fb-ws7rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51a79a28a68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.632 [INFO][5735] k8s.go 578: 
Cleaning up netns ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.632 [INFO][5735] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" iface="eth0" netns="" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.632 [INFO][5735] k8s.go 585: Releasing IP address(es) ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.632 [INFO][5735] utils.go 188: Calico CNI releasing IP address ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.650 [INFO][5742] ipam_plugin.go 415: Releasing address using handleID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.650 [INFO][5742] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.650 [INFO][5742] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.656 [WARNING][5742] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.656 [INFO][5742] ipam_plugin.go 443: Releasing address using workloadID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.657 [INFO][5742] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.659329 env[1420]: 2024-02-12 19:45:54.658 [INFO][5735] k8s.go 591: Teardown processing complete. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.660106 env[1420]: time="2024-02-12T19:45:54.659385383Z" level=info msg="TearDown network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" successfully" Feb 12 19:45:54.660106 env[1420]: time="2024-02-12T19:45:54.659424483Z" level=info msg="StopPodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" returns successfully" Feb 12 19:45:54.660106 env[1420]: time="2024-02-12T19:45:54.659823190Z" level=info msg="RemovePodSandbox for \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" Feb 12 19:45:54.660106 env[1420]: time="2024-02-12T19:45:54.659857490Z" level=info msg="Forcibly stopping sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\"" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.691 [WARNING][5762] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"4f0b1352-378c-4dae-bd2a-c8486e2500ed", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"5cbf2140d3ece04aceefe49ffef8f007ae086e313b27901f8ef22cd4187d1d14", Pod:"coredns-787d4945fb-ws7rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51a79a28a68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.692 [INFO][5762] k8s.go 578: 
Cleaning up netns ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.692 [INFO][5762] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" iface="eth0" netns="" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.692 [INFO][5762] k8s.go 585: Releasing IP address(es) ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.692 [INFO][5762] utils.go 188: Calico CNI releasing IP address ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.710 [INFO][5768] ipam_plugin.go 415: Releasing address using handleID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.710 [INFO][5768] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.710 [INFO][5768] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.715 [WARNING][5768] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.715 [INFO][5768] ipam_plugin.go 443: Releasing address using workloadID ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" HandleID="k8s-pod-network.84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--ws7rt-eth0" Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.716 [INFO][5768] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.718435 env[1420]: 2024-02-12 19:45:54.717 [INFO][5762] k8s.go 591: Teardown processing complete. ContainerID="84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5" Feb 12 19:45:54.719089 env[1420]: time="2024-02-12T19:45:54.718462615Z" level=info msg="TearDown network for sandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" successfully" Feb 12 19:45:54.732569 env[1420]: time="2024-02-12T19:45:54.732479636Z" level=info msg="RemovePodSandbox \"84c69e2603ef5c21e1039cef2ef8757da503acfa134c8bdda8ee5d4509a0a1a5\" returns successfully" Feb 12 19:45:54.733234 env[1420]: time="2024-02-12T19:45:54.733200548Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.780 [WARNING][5786] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8", Pod:"coredns-787d4945fb-j8xlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0503685bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.780 [INFO][5786] k8s.go 578: 
Cleaning up netns ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.781 [INFO][5786] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" iface="eth0" netns="" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.781 [INFO][5786] k8s.go 585: Releasing IP address(es) ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.781 [INFO][5786] utils.go 188: Calico CNI releasing IP address ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.799 [INFO][5792] ipam_plugin.go 415: Releasing address using handleID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.799 [INFO][5792] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.800 [INFO][5792] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.805 [WARNING][5792] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.805 [INFO][5792] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0" Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.806 [INFO][5792] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:45:54.810753 env[1420]: 2024-02-12 19:45:54.807 [INFO][5786] k8s.go 591: Teardown processing complete. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Feb 12 19:45:54.810753 env[1420]: time="2024-02-12T19:45:54.808540637Z" level=info msg="TearDown network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" successfully" Feb 12 19:45:54.810753 env[1420]: time="2024-02-12T19:45:54.808575737Z" level=info msg="StopPodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" returns successfully" Feb 12 19:45:54.810753 env[1420]: time="2024-02-12T19:45:54.809074845Z" level=info msg="RemovePodSandbox for \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:45:54.810753 env[1420]: time="2024-02-12T19:45:54.809112446Z" level=info msg="Forcibly stopping sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\"" Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.844 [WARNING][5811] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"57d1cdf4-1214-4bf4-ab2d-b4fcd203788b", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 44, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-c8dbf10a06", ContainerID:"2d0f7ca34aea2d2df00f5962a46cd6113b517b203114f3f02f1b0b07ecf81df8", Pod:"coredns-787d4945fb-j8xlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0503685bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.844 [INFO][5811] k8s.go 578: 
Cleaning up netns ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.844 [INFO][5811] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" iface="eth0" netns=""
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.844 [INFO][5811] k8s.go 585: Releasing IP address(es) ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.844 [INFO][5811] utils.go 188: Calico CNI releasing IP address ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.861 [INFO][5817] ipam_plugin.go 415: Releasing address using handleID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.861 [INFO][5817] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.861 [INFO][5817] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.866 [WARNING][5817] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.866 [INFO][5817] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" HandleID="k8s-pod-network.8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5" Workload="ci--3510.3.2--a--c8dbf10a06-k8s-coredns--787d4945fb--j8xlt-eth0"
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.868 [INFO][5817] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:45:54.869850 env[1420]: 2024-02-12 19:45:54.868 [INFO][5811] k8s.go 591: Teardown processing complete. ContainerID="8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5"
Feb 12 19:45:54.870556 env[1420]: time="2024-02-12T19:45:54.869878605Z" level=info msg="TearDown network for sandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" successfully"
Feb 12 19:45:54.878478 env[1420]: time="2024-02-12T19:45:54.878436740Z" level=info msg="RemovePodSandbox \"8aa24b9061172a87a25995c37245004223425e3a0d10e3972d759b7ca6e43ca5\" returns successfully"
Feb 12 19:45:55.857137 env[1420]: time="2024-02-12T19:45:55.857089041Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:55.866495 env[1420]: time="2024-02-12T19:45:55.866406787Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:55.876280 env[1420]: time="2024-02-12T19:45:55.876189040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:55.884807 env[1420]: time="2024-02-12T19:45:55.884771274Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:55.885258 env[1420]: time="2024-02-12T19:45:55.885226981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\""
Feb 12 19:45:55.887326 env[1420]: time="2024-02-12T19:45:55.887296413Z" level=info msg="CreateContainer within sandbox \"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 12 19:45:55.916360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270401333.mount: Deactivated successfully.
Feb 12 19:45:55.928691 env[1420]: time="2024-02-12T19:45:55.928656959Z" level=info msg="CreateContainer within sandbox \"2d8162ca945f5cab3a48a4253c51cf410de3e791cff398f5953e1097186e6f4b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d821c849f184e30823b6f4af55d32ef0802a5cc60fe6f18a010c0d342dd14ed5\""
Feb 12 19:45:55.929176 env[1420]: time="2024-02-12T19:45:55.929141566Z" level=info msg="StartContainer for \"d821c849f184e30823b6f4af55d32ef0802a5cc60fe6f18a010c0d342dd14ed5\""
Feb 12 19:45:56.009354 env[1420]: time="2024-02-12T19:45:56.009252616Z" level=info msg="StartContainer for \"d821c849f184e30823b6f4af55d32ef0802a5cc60fe6f18a010c0d342dd14ed5\" returns successfully"
Feb 12 19:45:56.227394 kubelet[2648]: I0212 19:45:56.227364 2648 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 12 19:45:56.227958 kubelet[2648]: I0212 19:45:56.227455 2648 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 12 19:45:56.392326 kubelet[2648]: I0212 19:45:56.392274 2648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8z7w9" podStartSLOduration=-9.223371936462543e+09 pod.CreationTimestamp="2024-02-12 19:44:16 +0000 UTC" firstStartedPulling="2024-02-12 19:45:22.025641049 +0000 UTC m=+88.196468885" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:56.391490522 +0000 UTC m=+122.562318358" watchObservedRunningTime="2024-02-12 19:45:56.392231933 +0000 UTC m=+122.563059769"
Feb 12 19:45:59.533144 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.mOCjxf.mount: Deactivated successfully.
Feb 12 19:46:02.024418 systemd[1]: run-containerd-runc-k8s.io-e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35-runc.Jeu8AO.mount: Deactivated successfully.
Feb 12 19:46:02.169403 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 12 19:46:02.169530 kernel: audit: type=1325 audit(1707767162.163:342): table=filter:144 family=2 entries=7 op=nft_register_rule pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.163000 audit[5949]: NETFILTER_CFG table=filter:144 family=2 entries=7 op=nft_register_rule pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.163000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcb1404a50 a2=0 a3=7ffcb1404a3c items=0 ppid=2844 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.202152 kernel: audit: type=1300 audit(1707767162.163:342): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcb1404a50 a2=0 a3=7ffcb1404a3c items=0 ppid=2844 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.212999 kernel: audit: type=1327 audit(1707767162.163:342): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.213098 kernel: audit: type=1325 audit(1707767162.163:343): table=nat:145 family=2 entries=85 op=nft_register_chain pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.163000 audit[5949]: NETFILTER_CFG table=nat:145 family=2 entries=85 op=nft_register_chain pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.163000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffcb1404a50 a2=0 a3=7ffcb1404a3c items=0 ppid=2844 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.258538 kernel: audit: type=1300 audit(1707767162.163:343): arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffcb1404a50 a2=0 a3=7ffcb1404a3c items=0 ppid=2844 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.258627 kernel: audit: type=1327 audit(1707767162.163:343): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.258653 kernel: audit: type=1325 audit(1707767162.221:344): table=filter:146 family=2 entries=6 op=nft_register_rule pid=5975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.221000 audit[5975]: NETFILTER_CFG table=filter:146 family=2 entries=6 op=nft_register_rule pid=5975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.221000 audit[5975]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcdc8cc580 a2=0 a3=7ffcdc8cc56c items=0 ppid=2844 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.288738 kernel: audit: type=1300 audit(1707767162.221:344): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcdc8cc580 a2=0 a3=7ffcdc8cc56c items=0 ppid=2844 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.288819 kernel: audit: type=1327 audit(1707767162.221:344): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:02.299363 kernel: audit: type=1325 audit(1707767162.231:345): table=nat:147 family=2 entries=92 op=nft_register_chain pid=5975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.231000 audit[5975]: NETFILTER_CFG table=nat:147 family=2 entries=92 op=nft_register_chain pid=5975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:46:02.231000 audit[5975]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffcdc8cc580 a2=0 a3=7ffcdc8cc56c items=0 ppid=2844 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:46:02.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:46:18.154555 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.Bm7kNF.mount: Deactivated successfully.
Feb 12 19:46:29.532785 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.URejFl.mount: Deactivated successfully.
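The `proctitle=` values in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A minimal sketch of decoding one (the hex value is taken verbatim from the `iptables-restore` records above; the helper name `decode_proctitle` is illustrative, not part of any audit tooling):

```python
def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE hex-encodes argv; arguments are separated by NUL bytes,
    # which we replace with spaces to recover the command line.
    return bytes.fromhex(hex_value).decode("ascii").replace("\x00", " ")

# proctitle value copied from the audit records above
hex_value = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(hex_value))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding applies to the sshd records later in the log, where `proctitle=737368643A20636F7265205B707269765D` is "sshd: core [priv]".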
Feb 12 19:46:32.028069 systemd[1]: run-containerd-runc-k8s.io-e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35-runc.bot9Px.mount: Deactivated successfully.
Feb 12 19:46:32.061872 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.cubDK1.mount: Deactivated successfully.
Feb 12 19:46:48.131841 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.KQhUmd.mount: Deactivated successfully.
Feb 12 19:46:59.532287 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.g1J82O.mount: Deactivated successfully.
Feb 12 19:47:02.018746 systemd[1]: run-containerd-runc-k8s.io-e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35-runc.CU4Fh4.mount: Deactivated successfully.
Feb 12 19:47:02.058765 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.NAaQIK.mount: Deactivated successfully.
Feb 12 19:47:08.345419 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 12 19:47:08.345576 kernel: audit: type=1130 audit(1707767228.320:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.35:22-10.200.12.6:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:08.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.35:22-10.200.12.6:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:08.321255 systemd[1]: Started sshd@7-10.200.8.35:22-10.200.12.6:35092.service.
Feb 12 19:47:08.937000 audit[6197]: USER_ACCT pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:08.958283 sshd[6197]: Accepted publickey for core from 10.200.12.6 port 35092 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:47:08.958715 kernel: audit: type=1101 audit(1707767228.937:347): pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:08.958758 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:47:08.956000 audit[6197]: CRED_ACQ pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:08.966534 systemd-logind[1403]: New session 10 of user core.
Feb 12 19:47:08.967815 systemd[1]: Started session-10.scope.
Feb 12 19:47:08.988921 kernel: audit: type=1103 audit(1707767228.956:348): pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:08.989056 kernel: audit: type=1006 audit(1707767228.956:349): pid=6197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Feb 12 19:47:08.989092 kernel: audit: type=1300 audit(1707767228.956:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03be4700 a2=3 a3=0 items=0 ppid=1 pid=6197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:08.956000 audit[6197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03be4700 a2=3 a3=0 items=0 ppid=1 pid=6197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:08.956000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:09.008412 kernel: audit: type=1327 audit(1707767228.956:349): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:08.976000 audit[6197]: USER_START pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.014353 kernel: audit: type=1105 audit(1707767228.976:350): pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:08.976000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.048183 kernel: audit: type=1103 audit(1707767228.976:351): pid=6200 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.456537 sshd[6197]: pam_unix(sshd:session): session closed for user core
Feb 12 19:47:09.456000 audit[6197]: USER_END pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.460431 systemd-logind[1403]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:47:09.462035 systemd[1]: sshd@7-10.200.8.35:22-10.200.12.6:35092.service: Deactivated successfully.
Feb 12 19:47:09.463030 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:47:09.464917 systemd-logind[1403]: Removed session 10.
Feb 12 19:47:09.457000 audit[6197]: CRED_DISP pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.497417 kernel: audit: type=1106 audit(1707767229.456:352): pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.497498 kernel: audit: type=1104 audit(1707767229.457:353): pid=6197 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:09.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.35:22-10.200.12.6:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:14.562270 systemd[1]: Started sshd@8-10.200.8.35:22-10.200.12.6:35102.service.
Feb 12 19:47:14.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.35:22-10.200.12.6:35102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:14.568610 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:47:14.568690 kernel: audit: type=1130 audit(1707767234.561:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.35:22-10.200.12.6:35102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:15.194000 audit[6232]: USER_ACCT pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.197484 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:47:15.194000 audit[6232]: CRED_ACQ pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.215166 sshd[6232]: Accepted publickey for core from 10.200.12.6 port 35102 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:47:15.215363 kernel: audit: type=1101 audit(1707767235.194:356): pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.215409 kernel: audit: type=1103 audit(1707767235.194:357): pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.220863 systemd-logind[1403]: New session 11 of user core.
Feb 12 19:47:15.221590 systemd[1]: Started session-11.scope.
Feb 12 19:47:15.194000 audit[6232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5befe130 a2=3 a3=0 items=0 ppid=1 pid=6232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:15.244358 kernel: audit: type=1006 audit(1707767235.194:358): pid=6232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Feb 12 19:47:15.244401 kernel: audit: type=1300 audit(1707767235.194:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5befe130 a2=3 a3=0 items=0 ppid=1 pid=6232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:15.263428 kernel: audit: type=1327 audit(1707767235.194:358): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:15.194000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:15.230000 audit[6232]: USER_START pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.269361 kernel: audit: type=1105 audit(1707767235.230:359): pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.232000 audit[6236]: CRED_ACQ pid=6236 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.303410 kernel: audit: type=1103 audit(1707767235.232:360): pid=6236 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.690294 sshd[6232]: pam_unix(sshd:session): session closed for user core
Feb 12 19:47:15.690000 audit[6232]: USER_END pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.693792 systemd[1]: sshd@8-10.200.8.35:22-10.200.12.6:35102.service: Deactivated successfully.
Feb 12 19:47:15.694851 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:47:15.698914 systemd-logind[1403]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:47:15.699819 systemd-logind[1403]: Removed session 11.
Feb 12 19:47:15.711364 kernel: audit: type=1106 audit(1707767235.690:361): pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.711449 kernel: audit: type=1104 audit(1707767235.690:362): pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.690000 audit[6232]: CRED_DISP pid=6232 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:15.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.35:22-10.200.12.6:35102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:18.135581 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.nNishz.mount: Deactivated successfully.
Feb 12 19:47:20.795066 systemd[1]: Started sshd@9-10.200.8.35:22-10.200.12.6:51914.service.
Feb 12 19:47:20.818913 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:47:20.819022 kernel: audit: type=1130 audit(1707767240.794:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.35:22-10.200.12.6:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:20.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.35:22-10.200.12.6:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:21.416000 audit[6266]: USER_ACCT pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.418054 sshd[6266]: Accepted publickey for core from 10.200.12.6 port 51914 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:47:21.435000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.437564 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:47:21.448492 systemd[1]: Started session-12.scope.
Feb 12 19:47:21.449971 systemd-logind[1403]: New session 12 of user core.
Feb 12 19:47:21.455463 kernel: audit: type=1101 audit(1707767241.416:365): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.455551 kernel: audit: type=1103 audit(1707767241.435:366): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.478214 kernel: audit: type=1006 audit(1707767241.435:367): pid=6266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Feb 12 19:47:21.478280 kernel: audit: type=1300 audit(1707767241.435:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0ca7aa10 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:21.435000 audit[6266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0ca7aa10 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:21.435000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:21.489393 kernel: audit: type=1327 audit(1707767241.435:367): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:21.455000 audit[6266]: USER_START pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.515127 kernel: audit: type=1105 audit(1707767241.455:368): pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.457000 audit[6272]: CRED_ACQ pid=6272 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.531313 kernel: audit: type=1103 audit(1707767241.457:369): pid=6272 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.916629 sshd[6266]: pam_unix(sshd:session): session closed for user core
Feb 12 19:47:21.916000 audit[6266]: USER_END pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.919950 systemd-logind[1403]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:47:21.921302 systemd[1]: sshd@9-10.200.8.35:22-10.200.12.6:51914.service: Deactivated successfully.
Feb 12 19:47:21.922137 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:47:21.923868 systemd-logind[1403]: Removed session 12.
Feb 12 19:47:21.937359 kernel: audit: type=1106 audit(1707767241.916:370): pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.937433 kernel: audit: type=1104 audit(1707767241.916:371): pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.916000 audit[6266]: CRED_DISP pid=6266 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:21.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.35:22-10.200.12.6:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:27.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.35:22-10.200.12.6:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:27.023859 systemd[1]: Started sshd@10-10.200.8.35:22-10.200.12.6:34472.service.
Feb 12 19:47:27.028867 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:47:27.028945 kernel: audit: type=1130 audit(1707767247.022:373): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.35:22-10.200.12.6:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:47:27.658000 audit[6286]: USER_ACCT pid=6286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.660239 sshd[6286]: Accepted publickey for core from 10.200.12.6 port 34472 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:47:27.677000 audit[6286]: CRED_ACQ pid=6286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.679407 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:47:27.684831 systemd[1]: Started session-13.scope.
Feb 12 19:47:27.685906 systemd-logind[1403]: New session 13 of user core.
Feb 12 19:47:27.697448 kernel: audit: type=1101 audit(1707767247.658:374): pid=6286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.697517 kernel: audit: type=1103 audit(1707767247.677:375): pid=6286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.708392 kernel: audit: type=1006 audit(1707767247.677:376): pid=6286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Feb 12 19:47:27.677000 audit[6286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec4a245b0 a2=3 a3=0 items=0 ppid=1 pid=6286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:27.727848 kernel: audit: type=1300 audit(1707767247.677:376): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec4a245b0 a2=3 a3=0 items=0 ppid=1 pid=6286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:47:27.727918 kernel: audit: type=1327 audit(1707767247.677:376): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:27.677000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:47:27.688000 audit[6286]: USER_START pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.733354 kernel: audit: type=1105 audit(1707767247.688:377): pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.696000 audit[6288]: CRED_ACQ pid=6288 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:27.768320 kernel: audit: type=1103 audit(1707767247.696:378): pid=6288 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:47:28.156321 sshd[6286]: pam_unix(sshd:session):
session closed for user core Feb 12 19:47:28.156000 audit[6286]: USER_END pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:28.159556 systemd-logind[1403]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:47:28.160937 systemd[1]: sshd@10-10.200.8.35:22-10.200.12.6:34472.service: Deactivated successfully. Feb 12 19:47:28.161791 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:47:28.163222 systemd-logind[1403]: Removed session 13. Feb 12 19:47:28.156000 audit[6286]: CRED_DISP pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:28.192924 kernel: audit: type=1106 audit(1707767248.156:379): pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:28.192992 kernel: audit: type=1104 audit(1707767248.156:380): pid=6286 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:28.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.35:22-10.200.12.6:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:47:29.539481 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.EwZ41f.mount: Deactivated successfully. Feb 12 19:47:32.059153 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.sEBhwT.mount: Deactivated successfully. Feb 12 19:47:33.264362 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:47:33.264520 kernel: audit: type=1130 audit(1707767253.258:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.35:22-10.200.12.6:34478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:33.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.35:22-10.200.12.6:34478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:33.260007 systemd[1]: Started sshd@11-10.200.8.35:22-10.200.12.6:34478.service. 
Feb 12 19:47:33.873594 sshd[6360]: Accepted publickey for core from 10.200.12.6 port 34478 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:33.872000 audit[6360]: USER_ACCT pid=6360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.895390 kernel: audit: type=1101 audit(1707767253.872:383): pid=6360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.876117 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:33.874000 audit[6360]: CRED_ACQ pid=6360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.900603 systemd[1]: Started session-14.scope. Feb 12 19:47:33.901709 systemd-logind[1403]: New session 14 of user core. 
Feb 12 19:47:33.914978 kernel: audit: type=1103 audit(1707767253.874:384): pid=6360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.874000 audit[6360]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffade63d60 a2=3 a3=0 items=0 ppid=1 pid=6360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:33.927356 kernel: audit: type=1006 audit(1707767253.874:385): pid=6360 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 12 19:47:33.927397 kernel: audit: type=1300 audit(1707767253.874:385): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffade63d60 a2=3 a3=0 items=0 ppid=1 pid=6360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:33.946117 kernel: audit: type=1327 audit(1707767253.874:385): proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:33.874000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:33.912000 audit[6360]: USER_START pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.970099 kernel: audit: type=1105 audit(1707767253.912:386): pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.970169 kernel: audit: type=1103 audit(1707767253.914:387): pid=6364 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:33.914000 audit[6364]: CRED_ACQ pid=6364 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:34.388467 sshd[6360]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:34.388000 audit[6360]: USER_END pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:34.392371 systemd[1]: sshd@11-10.200.8.35:22-10.200.12.6:34478.service: Deactivated successfully. Feb 12 19:47:34.394315 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:47:34.395022 systemd-logind[1403]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:47:34.395953 systemd-logind[1403]: Removed session 14. 
Feb 12 19:47:34.389000 audit[6360]: CRED_DISP pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:34.426116 kernel: audit: type=1106 audit(1707767254.388:388): pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:34.426194 kernel: audit: type=1104 audit(1707767254.389:389): pid=6360 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:34.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.35:22-10.200.12.6:34478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:34.513596 systemd[1]: Started sshd@12-10.200.8.35:22-10.200.12.6:34484.service. Feb 12 19:47:34.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.35:22-10.200.12.6:34484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:47:35.135000 audit[6376]: USER_ACCT pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:35.136650 sshd[6376]: Accepted publickey for core from 10.200.12.6 port 34484 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:35.136000 audit[6376]: CRED_ACQ pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:35.136000 audit[6376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb236ff80 a2=3 a3=0 items=0 ppid=1 pid=6376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:35.136000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:35.138127 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:35.143178 systemd[1]: Started session-15.scope. Feb 12 19:47:35.143931 systemd-logind[1403]: New session 15 of user core. 
Feb 12 19:47:35.148000 audit[6376]: USER_START pid=6376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:35.150000 audit[6379]: CRED_ACQ pid=6379 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:36.569809 sshd[6376]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:36.570000 audit[6376]: USER_END pid=6376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:36.570000 audit[6376]: CRED_DISP pid=6376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:36.573900 systemd[1]: sshd@12-10.200.8.35:22-10.200.12.6:34484.service: Deactivated successfully. Feb 12 19:47:36.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.35:22-10.200.12.6:34484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:36.575514 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:47:36.575547 systemd-logind[1403]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:47:36.576749 systemd-logind[1403]: Removed session 15. 
Feb 12 19:47:36.673786 systemd[1]: Started sshd@13-10.200.8.35:22-10.200.12.6:34498.service. Feb 12 19:47:36.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.35:22-10.200.12.6:34498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:37.298000 audit[6388]: USER_ACCT pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.299633 sshd[6388]: Accepted publickey for core from 10.200.12.6 port 34498 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:37.299000 audit[6388]: CRED_ACQ pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.299000 audit[6388]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf0af8970 a2=3 a3=0 items=0 ppid=1 pid=6388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:37.299000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:37.301070 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:37.305850 systemd-logind[1403]: New session 16 of user core. Feb 12 19:47:37.306559 systemd[1]: Started session-16.scope. 
Feb 12 19:47:37.311000 audit[6388]: USER_START pid=6388 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.313000 audit[6391]: CRED_ACQ pid=6391 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.793400 sshd[6388]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:37.793000 audit[6388]: USER_END pid=6388 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.793000 audit[6388]: CRED_DISP pid=6388 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:37.796013 systemd[1]: sshd@13-10.200.8.35:22-10.200.12.6:34498.service: Deactivated successfully. Feb 12 19:47:37.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.35:22-10.200.12.6:34498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:37.797424 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:47:37.797451 systemd-logind[1403]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:47:37.799024 systemd-logind[1403]: Removed session 16. 
Feb 12 19:47:42.919608 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:47:42.919792 kernel: audit: type=1130 audit(1707767262.896:409): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.35:22-10.200.12.6:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:42.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.35:22-10.200.12.6:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:42.897416 systemd[1]: Started sshd@14-10.200.8.35:22-10.200.12.6:45076.service. Feb 12 19:47:43.511000 audit[6403]: USER_ACCT pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.533585 sshd[6403]: Accepted publickey for core from 10.200.12.6 port 45076 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:43.533975 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:43.534481 kernel: audit: type=1101 audit(1707767263.511:410): pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.534546 kernel: audit: type=1103 audit(1707767263.532:411): pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.532000 audit[6403]: CRED_ACQ pid=6403 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.539748 systemd-logind[1403]: New session 17 of user core. Feb 12 19:47:43.540563 systemd[1]: Started session-17.scope. Feb 12 19:47:43.563370 kernel: audit: type=1006 audit(1707767263.532:412): pid=6403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 12 19:47:43.563446 kernel: audit: type=1300 audit(1707767263.532:412): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe93fb250 a2=3 a3=0 items=0 ppid=1 pid=6403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:43.532000 audit[6403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe93fb250 a2=3 a3=0 items=0 ppid=1 pid=6403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:43.532000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:43.586554 kernel: audit: type=1327 audit(1707767263.532:412): proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:43.539000 audit[6403]: USER_START pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.605430 kernel: audit: type=1105 audit(1707767263.539:413): pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.544000 audit[6405]: CRED_ACQ pid=6405 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:43.621382 kernel: audit: type=1103 audit(1707767263.544:414): pid=6405 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:44.021606 sshd[6403]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:44.021000 audit[6403]: USER_END pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:44.024974 systemd[1]: sshd@14-10.200.8.35:22-10.200.12.6:45076.service: Deactivated successfully. Feb 12 19:47:44.026010 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:47:44.033421 systemd-logind[1403]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:47:44.034326 systemd-logind[1403]: Removed session 17. 
Feb 12 19:47:44.022000 audit[6403]: CRED_DISP pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:44.044361 kernel: audit: type=1106 audit(1707767264.021:415): pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:44.044415 kernel: audit: type=1104 audit(1707767264.022:416): pid=6403 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:44.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.35:22-10.200.12.6:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:48.130170 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.8Cj9rc.mount: Deactivated successfully. Feb 12 19:47:49.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.35:22-10.200.12.6:59990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:49.123888 systemd[1]: Started sshd@15-10.200.8.35:22-10.200.12.6:59990.service. 
Feb 12 19:47:49.128895 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:47:49.128976 kernel: audit: type=1130 audit(1707767269.122:418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.35:22-10.200.12.6:59990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:49.735000 audit[6444]: USER_ACCT pid=6444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.737023 sshd[6444]: Accepted publickey for core from 10.200.12.6 port 59990 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:49.755000 audit[6444]: CRED_ACQ pid=6444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.757561 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:49.762738 systemd[1]: Started session-18.scope. Feb 12 19:47:49.764258 systemd-logind[1403]: New session 18 of user core. 
Feb 12 19:47:49.775692 kernel: audit: type=1101 audit(1707767269.735:419): pid=6444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.775760 kernel: audit: type=1103 audit(1707767269.755:420): pid=6444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.775788 kernel: audit: type=1006 audit(1707767269.755:421): pid=6444 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 12 19:47:49.755000 audit[6444]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffde066be0 a2=3 a3=0 items=0 ppid=1 pid=6444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:49.787351 kernel: audit: type=1300 audit(1707767269.755:421): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffde066be0 a2=3 a3=0 items=0 ppid=1 pid=6444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:49.755000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:49.809940 kernel: audit: type=1327 audit(1707767269.755:421): proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:49.767000 audit[6444]: USER_START pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 
terminal=ssh res=success' Feb 12 19:47:49.810385 kernel: audit: type=1105 audit(1707767269.767:422): pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.774000 audit[6453]: CRED_ACQ pid=6453 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:49.833354 kernel: audit: type=1103 audit(1707767269.774:423): pid=6453 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:50.255033 sshd[6444]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:50.255000 audit[6444]: USER_END pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:50.264487 systemd[1]: sshd@15-10.200.8.35:22-10.200.12.6:59990.service: Deactivated successfully. Feb 12 19:47:50.265294 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:47:50.266744 systemd-logind[1403]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:47:50.267657 systemd-logind[1403]: Removed session 18. 
Feb 12 19:47:50.284076 kernel: audit: type=1106 audit(1707767270.255:424): pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:50.284148 kernel: audit: type=1104 audit(1707767270.255:425): pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:50.255000 audit[6444]: CRED_DISP pid=6444 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:50.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.35:22-10.200.12.6:59990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:55.344602 systemd[1]: Started sshd@16-10.200.8.35:22-10.200.12.6:60004.service. Feb 12 19:47:55.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.35:22-10.200.12.6:60004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:55.351808 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:47:55.351874 kernel: audit: type=1130 audit(1707767275.343:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.35:22-10.200.12.6:60004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:47:55.964000 audit[6466]: USER_ACCT pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:55.967208 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:55.986413 kernel: audit: type=1101 audit(1707767275.964:428): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:55.986456 sshd[6466]: Accepted publickey for core from 10.200.12.6 port 60004 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:55.965000 audit[6466]: CRED_ACQ pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:55.991453 systemd[1]: Started session-19.scope. Feb 12 19:47:55.992397 systemd-logind[1403]: New session 19 of user core. 
Feb 12 19:47:56.004374 kernel: audit: type=1103 audit(1707767275.965:429): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:55.965000 audit[6466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec2f97110 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:56.032553 kernel: audit: type=1006 audit(1707767275.965:430): pid=6466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 12 19:47:56.032645 kernel: audit: type=1300 audit(1707767275.965:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec2f97110 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:47:56.035722 kernel: audit: type=1327 audit(1707767275.965:430): proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:55.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:47:55.995000 audit[6466]: USER_START pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.057841 kernel: audit: type=1105 audit(1707767275.995:431): pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:55.997000 audit[6468]: CRED_ACQ pid=6468 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.073735 kernel: audit: type=1103 audit(1707767275.997:432): pid=6468 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.468555 sshd[6466]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:56.468000 audit[6466]: USER_END pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.471358 systemd[1]: sshd@16-10.200.8.35:22-10.200.12.6:60004.service: Deactivated successfully. Feb 12 19:47:56.472233 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:47:56.473289 systemd-logind[1403]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:47:56.474078 systemd-logind[1403]: Removed session 19. 
Feb 12 19:47:56.468000 audit[6466]: CRED_DISP pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.505307 kernel: audit: type=1106 audit(1707767276.468:433): pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.505413 kernel: audit: type=1104 audit(1707767276.468:434): pid=6466 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:47:56.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.35:22-10.200.12.6:60004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:47:59.534724 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.AiLFxe.mount: Deactivated successfully. Feb 12 19:48:01.594611 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:48:01.594755 kernel: audit: type=1130 audit(1707767281.571:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.35:22-10.200.12.6:51754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:01.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.35:22-10.200.12.6:51754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:01.571747 systemd[1]: Started sshd@17-10.200.8.35:22-10.200.12.6:51754.service. Feb 12 19:48:02.021420 systemd[1]: run-containerd-runc-k8s.io-e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35-runc.KtCjir.mount: Deactivated successfully. Feb 12 19:48:02.067134 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.pBvKFR.mount: Deactivated successfully. Feb 12 19:48:02.191000 audit[6501]: USER_ACCT pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.192323 sshd[6501]: Accepted publickey for core from 10.200.12.6 port 51754 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:02.210973 kernel: audit: type=1101 audit(1707767282.191:437): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.211061 kernel: audit: type=1103 audit(1707767282.210:438): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.210000 audit[6501]: CRED_ACQ pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.210861 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:02.216084 systemd[1]: Started session-20.scope. 
Feb 12 19:48:02.217049 systemd-logind[1403]: New session 20 of user core. Feb 12 19:48:02.239490 kernel: audit: type=1006 audit(1707767282.210:439): pid=6501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Feb 12 19:48:02.239588 kernel: audit: type=1300 audit(1707767282.210:439): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1cbbc7e0 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:02.210000 audit[6501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1cbbc7e0 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:02.257438 kernel: audit: type=1327 audit(1707767282.210:439): proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:02.210000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:02.221000 audit[6501]: USER_START pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.263357 kernel: audit: type=1105 audit(1707767282.221:440): pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.281432 kernel: audit: type=1103 audit(1707767282.229:441): pid=6543 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.229000 audit[6543]: CRED_ACQ pid=6543 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.706412 sshd[6501]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:02.707000 audit[6501]: USER_END pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.709773 systemd[1]: sshd@17-10.200.8.35:22-10.200.12.6:51754.service: Deactivated successfully. Feb 12 19:48:02.710806 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:48:02.722173 systemd-logind[1403]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:48:02.723100 systemd-logind[1403]: Removed session 20. 
Feb 12 19:48:02.707000 audit[6501]: CRED_DISP pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.744933 kernel: audit: type=1106 audit(1707767282.707:442): pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.745014 kernel: audit: type=1104 audit(1707767282.707:443): pid=6501 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:02.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.35:22-10.200.12.6:51754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:02.809760 systemd[1]: Started sshd@18-10.200.8.35:22-10.200.12.6:51766.service. Feb 12 19:48:02.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.35:22-10.200.12.6:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:03.423000 audit[6553]: USER_ACCT pid=6553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.424032 sshd[6553]: Accepted publickey for core from 10.200.12.6 port 51766 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:03.424000 audit[6553]: CRED_ACQ pid=6553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.424000 audit[6553]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd64de3d70 a2=3 a3=0 items=0 ppid=1 pid=6553 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:03.424000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:03.425746 sshd[6553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:03.431570 systemd[1]: Started session-21.scope. Feb 12 19:48:03.431833 systemd-logind[1403]: New session 21 of user core. 
Feb 12 19:48:03.437000 audit[6553]: USER_START pid=6553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.439000 audit[6556]: CRED_ACQ pid=6556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.956202 sshd[6553]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:03.957000 audit[6553]: USER_END pid=6553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.957000 audit[6553]: CRED_DISP pid=6553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:03.959687 systemd[1]: sshd@18-10.200.8.35:22-10.200.12.6:51766.service: Deactivated successfully. Feb 12 19:48:03.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.35:22-10.200.12.6:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:03.960843 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:48:03.961960 systemd-logind[1403]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:48:03.963063 systemd-logind[1403]: Removed session 21. 
Feb 12 19:48:04.059879 systemd[1]: Started sshd@19-10.200.8.35:22-10.200.12.6:51778.service. Feb 12 19:48:04.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.35:22-10.200.12.6:51778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:04.678000 audit[6563]: USER_ACCT pid=6563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:04.679129 sshd[6563]: Accepted publickey for core from 10.200.12.6 port 51778 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:04.679000 audit[6563]: CRED_ACQ pid=6563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:04.679000 audit[6563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef38d53a0 a2=3 a3=0 items=0 ppid=1 pid=6563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:04.679000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:04.680676 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:04.685784 systemd-logind[1403]: New session 22 of user core. Feb 12 19:48:04.686151 systemd[1]: Started session-22.scope. 
Feb 12 19:48:04.692000 audit[6563]: USER_START pid=6563 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:04.693000 audit[6566]: CRED_ACQ pid=6566 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:06.224000 audit[6601]: NETFILTER_CFG table=filter:148 family=2 entries=6 op=nft_register_rule pid=6601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:06.224000 audit[6601]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd2199aa10 a2=0 a3=7ffd2199a9fc items=0 ppid=2844 pid=6601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:06.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:06.226000 audit[6601]: NETFILTER_CFG table=nat:149 family=2 entries=94 op=nft_register_rule pid=6601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:06.226000 audit[6601]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd2199aa10 a2=0 a3=7ffd2199a9fc items=0 ppid=2844 pid=6601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:06.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 
19:48:06.264000 audit[6627]: NETFILTER_CFG table=filter:150 family=2 entries=18 op=nft_register_rule pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:06.264000 audit[6627]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe5e802870 a2=0 a3=7ffe5e80285c items=0 ppid=2844 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:06.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:06.266000 audit[6627]: NETFILTER_CFG table=nat:151 family=2 entries=94 op=nft_register_rule pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:06.266000 audit[6627]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe5e802870 a2=0 a3=7ffe5e80285c items=0 ppid=2844 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:06.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:06.268000 audit[6563]: USER_END pid=6563 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:06.268000 audit[6563]: CRED_DISP pid=6563 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 
19:48:06.267854 sshd[6563]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:06.270709 systemd[1]: sshd@19-10.200.8.35:22-10.200.12.6:51778.service: Deactivated successfully. Feb 12 19:48:06.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.35:22-10.200.12.6:51778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:06.272122 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:48:06.272811 systemd-logind[1403]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:48:06.273949 systemd-logind[1403]: Removed session 22. Feb 12 19:48:06.373764 systemd[1]: Started sshd@20-10.200.8.35:22-10.200.12.6:51786.service. Feb 12 19:48:06.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.35:22-10.200.12.6:51786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:06.996000 audit[6630]: USER_ACCT pid=6630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:06.997561 sshd[6630]: Accepted publickey for core from 10.200.12.6 port 51786 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:07.002904 kernel: kauditd_printk_skb: 36 callbacks suppressed Feb 12 19:48:07.002993 kernel: audit: type=1101 audit(1707767286.996:468): pid=6630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.002676 sshd[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:07.007672 systemd-logind[1403]: New session 23 of user core. Feb 12 19:48:07.009461 systemd[1]: Started session-23.scope. 
Feb 12 19:48:07.001000 audit[6630]: CRED_ACQ pid=6630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.038358 kernel: audit: type=1103 audit(1707767287.001:469): pid=6630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.038444 kernel: audit: type=1006 audit(1707767287.002:470): pid=6630 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 12 19:48:07.002000 audit[6630]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6f8295e0 a2=3 a3=0 items=0 ppid=1 pid=6630 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:07.049406 kernel: audit: type=1300 audit(1707767287.002:470): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6f8295e0 a2=3 a3=0 items=0 ppid=1 pid=6630 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:07.002000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:07.020000 audit[6630]: USER_START pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.093653 kernel: audit: type=1327 audit(1707767287.002:470): proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:07.093768 
kernel: audit: type=1105 audit(1707767287.020:471): pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.093804 kernel: audit: type=1103 audit(1707767287.022:472): pid=6633 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.022000 audit[6633]: CRED_ACQ pid=6633 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.655956 sshd[6630]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:07.656000 audit[6630]: USER_END pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.658693 systemd[1]: sshd@20-10.200.8.35:22-10.200.12.6:51786.service: Deactivated successfully. Feb 12 19:48:07.659563 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:48:07.665536 systemd-logind[1403]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:48:07.666586 systemd-logind[1403]: Removed session 23. 
Feb 12 19:48:07.656000 audit[6630]: CRED_DISP pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.693812 kernel: audit: type=1106 audit(1707767287.656:473): pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.693892 kernel: audit: type=1104 audit(1707767287.656:474): pid=6630 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:07.693916 kernel: audit: type=1131 audit(1707767287.658:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.35:22-10.200.12.6:51786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:07.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.35:22-10.200.12.6:51786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:07.774139 systemd[1]: Started sshd@21-10.200.8.35:22-10.200.12.6:48644.service. Feb 12 19:48:07.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.35:22-10.200.12.6:48644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:08.394000 audit[6643]: USER_ACCT pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.394803 sshd[6643]: Accepted publickey for core from 10.200.12.6 port 48644 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:08.395000 audit[6643]: CRED_ACQ pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.395000 audit[6643]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef1ab5870 a2=3 a3=0 items=0 ppid=1 pid=6643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:08.395000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:08.396299 sshd[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:08.401437 systemd[1]: Started session-24.scope. Feb 12 19:48:08.401693 systemd-logind[1403]: New session 24 of user core. 
Feb 12 19:48:08.411000 audit[6643]: USER_START pid=6643 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.412000 audit[6646]: CRED_ACQ pid=6646 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.897642 sshd[6643]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:08.898000 audit[6643]: USER_END pid=6643 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.899000 audit[6643]: CRED_DISP pid=6643 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:08.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.35:22-10.200.12.6:48644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:08.901004 systemd[1]: sshd@21-10.200.8.35:22-10.200.12.6:48644.service: Deactivated successfully. Feb 12 19:48:08.903705 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:48:08.904607 systemd-logind[1403]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:48:08.906993 systemd-logind[1403]: Removed session 24. 
Feb 12 19:48:14.001184 systemd[1]: Started sshd@22-10.200.8.35:22-10.200.12.6:48656.service. Feb 12 19:48:14.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.35:22-10.200.12.6:48656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:14.026037 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 12 19:48:14.026150 kernel: audit: type=1130 audit(1707767294.001:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.35:22-10.200.12.6:48656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:14.558000 audit[6704]: NETFILTER_CFG table=filter:152 family=2 entries=18 op=nft_register_rule pid=6704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:14.558000 audit[6704]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffec242900 a2=0 a3=7fffec2428ec items=0 ppid=2844 pid=6704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:14.590157 kernel: audit: type=1325 audit(1707767294.558:486): table=filter:152 family=2 entries=18 op=nft_register_rule pid=6704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:14.590241 kernel: audit: type=1300 audit(1707767294.558:486): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffec242900 a2=0 a3=7fffec2428ec items=0 ppid=2844 pid=6704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:14.590273 kernel: audit: type=1327 audit(1707767294.558:486): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:14.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:14.558000 audit[6704]: NETFILTER_CFG table=nat:153 family=2 entries=178 op=nft_register_chain pid=6704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:14.617740 kernel: audit: type=1325 audit(1707767294.558:487): table=nat:153 family=2 entries=178 op=nft_register_chain pid=6704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:48:14.617812 kernel: audit: type=1300 audit(1707767294.558:487): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fffec242900 a2=0 a3=7fffec2428ec items=0 ppid=2844 pid=6704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:14.558000 audit[6704]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fffec242900 a2=0 a3=7fffec2428ec items=0 ppid=2844 pid=6704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:14.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:14.637070 sshd[6675]: Accepted publickey for core from 10.200.12.6 port 48656 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:14.632000 audit[6675]: USER_ACCT pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh 
res=success' Feb 12 19:48:14.645454 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:14.650743 systemd[1]: Started session-25.scope. Feb 12 19:48:14.652190 systemd-logind[1403]: New session 25 of user core. Feb 12 19:48:14.664405 kernel: audit: type=1327 audit(1707767294.558:487): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:48:14.664466 kernel: audit: type=1101 audit(1707767294.632:488): pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:14.664526 kernel: audit: type=1103 audit(1707767294.644:489): pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:14.644000 audit[6675]: CRED_ACQ pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:14.691740 kernel: audit: type=1006 audit(1707767294.644:490): pid=6675 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 12 19:48:14.644000 audit[6675]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe85902c0 a2=3 a3=0 items=0 ppid=1 pid=6675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:14.644000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:14.655000 audit[6675]: 
USER_START pid=6675 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:14.663000 audit[6707]: CRED_ACQ pid=6707 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:15.129048 sshd[6675]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:15.130000 audit[6675]: USER_END pid=6675 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:15.130000 audit[6675]: CRED_DISP pid=6675 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:15.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.35:22-10.200.12.6:48656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:15.132691 systemd[1]: sshd@22-10.200.8.35:22-10.200.12.6:48656.service: Deactivated successfully. Feb 12 19:48:15.134981 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:48:15.135731 systemd-logind[1403]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:48:15.136677 systemd-logind[1403]: Removed session 25. 
Feb 12 19:48:20.256818 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 12 19:48:20.257022 kernel: audit: type=1130 audit(1707767300.232:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.35:22-10.200.12.6:49620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:20.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.35:22-10.200.12.6:49620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:20.232548 systemd[1]: Started sshd@23-10.200.8.35:22-10.200.12.6:49620.service. Feb 12 19:48:20.862000 audit[6737]: USER_ACCT pid=6737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.882261 sshd[6737]: Accepted publickey for core from 10.200.12.6 port 49620 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:20.882687 kernel: audit: type=1101 audit(1707767300.862:497): pid=6737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.882795 sshd[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:20.881000 audit[6737]: CRED_ACQ pid=6737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.893155 systemd-logind[1403]: New session 26 of user core. 
Feb 12 19:48:20.893876 systemd[1]: Started session-26.scope. Feb 12 19:48:20.918813 kernel: audit: type=1103 audit(1707767300.881:498): pid=6737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.918925 kernel: audit: type=1006 audit(1707767300.882:499): pid=6737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 12 19:48:20.918953 kernel: audit: type=1300 audit(1707767300.882:499): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca4c25f50 a2=3 a3=0 items=0 ppid=1 pid=6737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:20.882000 audit[6737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca4c25f50 a2=3 a3=0 items=0 ppid=1 pid=6737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:20.882000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:20.941821 kernel: audit: type=1327 audit(1707767300.882:499): proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:20.941904 kernel: audit: type=1105 audit(1707767300.895:500): pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.895000 audit[6737]: USER_START pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.900000 audit[6739]: CRED_ACQ pid=6739 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:20.961466 kernel: audit: type=1103 audit(1707767300.900:501): pid=6739 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:21.355958 sshd[6737]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:21.357000 audit[6737]: USER_END pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:21.359677 systemd[1]: sshd@23-10.200.8.35:22-10.200.12.6:49620.service: Deactivated successfully. Feb 12 19:48:21.360718 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:48:21.362651 systemd-logind[1403]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:48:21.363677 systemd-logind[1403]: Removed session 26. 
Feb 12 19:48:21.357000 audit[6737]: CRED_DISP pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:21.393042 kernel: audit: type=1106 audit(1707767301.357:502): pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:21.393152 kernel: audit: type=1104 audit(1707767301.357:503): pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:21.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.35:22-10.200.12.6:49620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:26.462256 systemd[1]: Started sshd@24-10.200.8.35:22-10.200.12.6:49622.service. Feb 12 19:48:26.485956 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:48:26.486013 kernel: audit: type=1130 audit(1707767306.462:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.35:22-10.200.12.6:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:26.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.35:22-10.200.12.6:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:27.117000 audit[6763]: USER_ACCT pid=6763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.118380 sshd[6763]: Accepted publickey for core from 10.200.12.6 port 49622 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:27.137364 kernel: audit: type=1101 audit(1707767307.117:506): pid=6763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.137852 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:27.136000 audit[6763]: CRED_ACQ pid=6763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.147867 systemd[1]: Started session-27.scope. Feb 12 19:48:27.148969 systemd-logind[1403]: New session 27 of user core. 
Feb 12 19:48:27.167330 kernel: audit: type=1103 audit(1707767307.136:507): pid=6763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.167408 kernel: audit: type=1006 audit(1707767307.137:508): pid=6763 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 12 19:48:27.167445 kernel: audit: type=1300 audit(1707767307.137:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefcbd46f0 a2=3 a3=0 items=0 ppid=1 pid=6763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:27.137000 audit[6763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefcbd46f0 a2=3 a3=0 items=0 ppid=1 pid=6763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:27.187105 kernel: audit: type=1327 audit(1707767307.137:508): proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:27.137000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:27.153000 audit[6763]: USER_START pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.191359 kernel: audit: type=1105 audit(1707767307.153:509): pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.156000 audit[6766]: CRED_ACQ pid=6766 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.209426 kernel: audit: type=1103 audit(1707767307.156:510): pid=6766 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.625881 sshd[6763]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:27.627000 audit[6763]: USER_END pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.629488 systemd-logind[1403]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:48:27.630664 systemd[1]: sshd@24-10.200.8.35:22-10.200.12.6:49622.service: Deactivated successfully. Feb 12 19:48:27.631519 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:48:27.632788 systemd-logind[1403]: Removed session 27. 
Feb 12 19:48:27.646367 kernel: audit: type=1106 audit(1707767307.627:511): pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.646471 kernel: audit: type=1104 audit(1707767307.627:512): pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.627000 audit[6763]: CRED_DISP pid=6763 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:27.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.35:22-10.200.12.6:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:29.538488 systemd[1]: run-containerd-runc-k8s.io-87c946a4cc1a2151320cf69b886df9b3cfffe9b63d6f4b587a1ddd99dc4f96f0-runc.CmhRJb.mount: Deactivated successfully. Feb 12 19:48:32.021101 systemd[1]: run-containerd-runc-k8s.io-e383cf017f17224e4b5d613add850a92a8d978b6564010e7b50ba1173c334e35-runc.L49tB3.mount: Deactivated successfully. Feb 12 19:48:32.057876 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.YcJDw0.mount: Deactivated successfully. Feb 12 19:48:32.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.35:22-10.200.12.6:55052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:32.730956 systemd[1]: Started sshd@25-10.200.8.35:22-10.200.12.6:55052.service. Feb 12 19:48:32.736433 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:48:32.736490 kernel: audit: type=1130 audit(1707767312.730:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.35:22-10.200.12.6:55052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:33.349000 audit[6836]: USER_ACCT pid=6836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.350035 sshd[6836]: Accepted publickey for core from 10.200.12.6 port 55052 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:33.372883 kernel: audit: type=1101 audit(1707767313.349:515): pid=6836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.373070 sshd[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:33.372000 audit[6836]: CRED_ACQ pid=6836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.378176 systemd[1]: Started session-28.scope. Feb 12 19:48:33.379330 systemd-logind[1403]: New session 28 of user core. 
Feb 12 19:48:33.401740 kernel: audit: type=1103 audit(1707767313.372:516): pid=6836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.401817 kernel: audit: type=1006 audit(1707767313.372:517): pid=6836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Feb 12 19:48:33.401847 kernel: audit: type=1300 audit(1707767313.372:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce47633f0 a2=3 a3=0 items=0 ppid=1 pid=6836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:33.372000 audit[6836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce47633f0 a2=3 a3=0 items=0 ppid=1 pid=6836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:48:33.372000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:33.424912 kernel: audit: type=1327 audit(1707767313.372:517): proctitle=737368643A20636F7265205B707269765D Feb 12 19:48:33.424973 kernel: audit: type=1105 audit(1707767313.389:518): pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.389000 audit[6836]: USER_START pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.391000 audit[6839]: CRED_ACQ pid=6839 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.459364 kernel: audit: type=1103 audit(1707767313.391:519): pid=6839 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.848838 sshd[6836]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:33.850000 audit[6836]: USER_END pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.852606 systemd-logind[1403]: Session 28 logged out. Waiting for processes to exit. Feb 12 19:48:33.854179 systemd[1]: sshd@25-10.200.8.35:22-10.200.12.6:55052.service: Deactivated successfully. Feb 12 19:48:33.855639 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 19:48:33.857085 systemd-logind[1403]: Removed session 28. 
Feb 12 19:48:33.850000 audit[6836]: CRED_DISP pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.886880 kernel: audit: type=1106 audit(1707767313.850:520): pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.886980 kernel: audit: type=1104 audit(1707767313.850:521): pid=6836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:33.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.35:22-10.200.12.6:55052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:38.951842 systemd[1]: Started sshd@26-10.200.8.35:22-10.200.12.6:48556.service. Feb 12 19:48:38.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.35:22-10.200.12.6:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:48:38.958285 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:48:38.958386 kernel: audit: type=1130 audit(1707767318.951:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.35:22-10.200.12.6:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:48:39.569000 audit[6852]: USER_ACCT pid=6852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:39.571203 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:39.591454 kernel: audit: type=1101 audit(1707767319.569:524): pid=6852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:39.591511 sshd[6852]: Accepted publickey for core from 10.200.12.6 port 48556 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:39.570000 audit[6852]: CRED_ACQ pid=6852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 12 19:48:39.600869 systemd[1]: Started session-29.scope. Feb 12 19:48:39.601383 systemd-logind[1403]: New session 29 of user core. 
Feb 12 19:48:39.613619 kernel: audit: type=1103 audit(1707767319.570:525): pid=6852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:39.613723 kernel: audit: type=1006 audit(1707767319.570:526): pid=6852 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Feb 12 19:48:39.625529 kernel: audit: type=1300 audit(1707767319.570:526): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9d9d3050 a2=3 a3=0 items=0 ppid=1 pid=6852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:48:39.570000 audit[6852]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9d9d3050 a2=3 a3=0 items=0 ppid=1 pid=6852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:48:39.570000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:48:39.650884 kernel: audit: type=1327 audit(1707767319.570:526): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:48:39.650983 kernel: audit: type=1105 audit(1707767319.616:527): pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:39.616000 audit[6852]: USER_START pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:39.619000 audit[6855]: CRED_ACQ pid=6855 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:39.687367 kernel: audit: type=1103 audit(1707767319.619:528): pid=6855 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:40.063379 sshd[6852]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:40.065000 audit[6852]: USER_END pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:40.068574 systemd-logind[1403]: Session 29 logged out. Waiting for processes to exit.
Feb 12 19:48:40.069906 systemd[1]: sshd@26-10.200.8.35:22-10.200.12.6:48556.service: Deactivated successfully.
Feb 12 19:48:40.070753 systemd[1]: session-29.scope: Deactivated successfully.
Feb 12 19:48:40.072257 systemd-logind[1403]: Removed session 29.
Feb 12 19:48:40.065000 audit[6852]: CRED_DISP pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:40.100620 kernel: audit: type=1106 audit(1707767320.065:529): pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:40.100696 kernel: audit: type=1104 audit(1707767320.065:530): pid=6852 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:40.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.35:22-10.200.12.6:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:48:45.186718 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:48:45.186851 kernel: audit: type=1130 audit(1707767325.163:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.35:22-10.200.12.6:48558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:48:45.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.35:22-10.200.12.6:48558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:48:45.163726 systemd[1]: Started sshd@27-10.200.8.35:22-10.200.12.6:48558.service.
Feb 12 19:48:45.775000 audit[6865]: USER_ACCT pid=6865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.794208 sshd[6865]: Accepted publickey for core from 10.200.12.6 port 48558 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:45.794623 kernel: audit: type=1101 audit(1707767325.775:533): pid=6865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.794775 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:45.793000 audit[6865]: CRED_ACQ pid=6865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.800908 systemd[1]: Started session-30.scope.
Feb 12 19:48:45.801379 systemd-logind[1403]: New session 30 of user core.
Feb 12 19:48:45.823873 kernel: audit: type=1103 audit(1707767325.793:534): pid=6865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.823953 kernel: audit: type=1006 audit(1707767325.793:535): pid=6865 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Feb 12 19:48:45.823981 kernel: audit: type=1300 audit(1707767325.793:535): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3601d8e0 a2=3 a3=0 items=0 ppid=1 pid=6865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:48:45.793000 audit[6865]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3601d8e0 a2=3 a3=0 items=0 ppid=1 pid=6865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:48:45.793000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:48:45.842394 kernel: audit: type=1327 audit(1707767325.793:535): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:48:45.806000 audit[6865]: USER_START pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.807000 audit[6868]: CRED_ACQ pid=6868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.881730 kernel: audit: type=1105 audit(1707767325.806:536): pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:45.881806 kernel: audit: type=1103 audit(1707767325.807:537): pid=6868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:46.302866 sshd[6865]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:46.303000 audit[6865]: USER_END pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:46.306038 systemd[1]: sshd@27-10.200.8.35:22-10.200.12.6:48558.service: Deactivated successfully.
Feb 12 19:48:46.306904 systemd[1]: session-30.scope: Deactivated successfully.
Feb 12 19:48:46.312945 systemd-logind[1403]: Session 30 logged out. Waiting for processes to exit.
Feb 12 19:48:46.313901 systemd-logind[1403]: Removed session 30.
Feb 12 19:48:46.303000 audit[6865]: CRED_DISP pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:46.342422 kernel: audit: type=1106 audit(1707767326.303:538): pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:46.342509 kernel: audit: type=1104 audit(1707767326.303:539): pid=6865 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 12 19:48:46.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.35:22-10.200.12.6:48558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:49:02.054036 systemd[1]: run-containerd-runc-k8s.io-069a1723ed5fe0525cdc1f8bc82c33cbac722b615a0bf64d3dfad1d313d65c65-runc.Ym0oEo.mount: Deactivated successfully.
Feb 12 19:49:18.131922 systemd[1]: run-containerd-runc-k8s.io-906f305c3857fec175346a44e5e82ac1c9bed2265fcf59a881248ac2f33d9f07-runc.stDI0g.mount: Deactivated successfully.
Feb 12 19:49:31.416871 env[1420]: time="2024-02-12T19:49:31.416819309Z" level=info msg="shim disconnected" id=c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f
Feb 12 19:49:31.417431 env[1420]: time="2024-02-12T19:49:31.417405212Z" level=warning msg="cleaning up after shim disconnected" id=c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f namespace=k8s.io
Feb 12 19:49:31.417547 env[1420]: time="2024-02-12T19:49:31.417524413Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:31.418984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f-rootfs.mount: Deactivated successfully.
Feb 12 19:49:31.426696 env[1420]: time="2024-02-12T19:49:31.426666562Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7045 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:31.874843 kubelet[2648]: I0212 19:49:31.874224 2648 scope.go:115] "RemoveContainer" containerID="c47b04af3b6defe4a8bf9bc7567a65334fc9e1a6534d0b02f6e8654e54b5cb7f"
Feb 12 19:49:31.877685 env[1420]: time="2024-02-12T19:49:31.877648019Z" level=info msg="CreateContainer within sandbox \"97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 12 19:49:31.908330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623942414.mount: Deactivated successfully.
Feb 12 19:49:31.918682 env[1420]: time="2024-02-12T19:49:31.918643242Z" level=info msg="CreateContainer within sandbox \"97008580f7a9778b516fc496f92b52d4454ea6342085d8ae772d069aab412188\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e07b11e18263f60bfce3d12d96b9a94111bc5f6638e1aa40f4b8305955959fc2\""
Feb 12 19:49:31.919030 env[1420]: time="2024-02-12T19:49:31.919001544Z" level=info msg="StartContainer for \"e07b11e18263f60bfce3d12d96b9a94111bc5f6638e1aa40f4b8305955959fc2\""
Feb 12 19:49:32.005505 env[1420]: time="2024-02-12T19:49:32.005453915Z" level=info msg="StartContainer for \"e07b11e18263f60bfce3d12d96b9a94111bc5f6638e1aa40f4b8305955959fc2\" returns successfully"
Feb 12 19:49:32.675140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e-rootfs.mount: Deactivated successfully.
Feb 12 19:49:32.677914 env[1420]: time="2024-02-12T19:49:32.677870475Z" level=info msg="shim disconnected" id=d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e
Feb 12 19:49:32.678400 env[1420]: time="2024-02-12T19:49:32.678380577Z" level=warning msg="cleaning up after shim disconnected" id=d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e namespace=k8s.io
Feb 12 19:49:32.678499 env[1420]: time="2024-02-12T19:49:32.678486778Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:32.688304 env[1420]: time="2024-02-12T19:49:32.688275331Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7141 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:32.880651 kubelet[2648]: I0212 19:49:32.880622 2648 scope.go:115] "RemoveContainer" containerID="d225403f6cef5a414b43ebbcf8f3a0f2ec1dc23310780fac706e3f0d49e8699e"
Feb 12 19:49:32.883238 env[1420]: time="2024-02-12T19:49:32.883195892Z" level=info msg="CreateContainer within sandbox \"a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 12 19:49:32.958228 env[1420]: time="2024-02-12T19:49:32.958121500Z" level=info msg="CreateContainer within sandbox \"a1c370efb9695d7c468ae8e92c4150511289127fd0b3b049369317379a12a2c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0e4ae2be7305e72c0dc4522715a413f7b6c97a3e76e6d69ef47841485643291f\""
Feb 12 19:49:32.958823 env[1420]: time="2024-02-12T19:49:32.958797503Z" level=info msg="StartContainer for \"0e4ae2be7305e72c0dc4522715a413f7b6c97a3e76e6d69ef47841485643291f\""
Feb 12 19:49:33.023396 env[1420]: time="2024-02-12T19:49:33.023352455Z" level=info msg="StartContainer for \"0e4ae2be7305e72c0dc4522715a413f7b6c97a3e76e6d69ef47841485643291f\" returns successfully"
Feb 12 19:49:38.323219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d-rootfs.mount: Deactivated successfully.
Feb 12 19:49:38.325909 env[1420]: time="2024-02-12T19:49:38.325759621Z" level=info msg="shim disconnected" id=2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d
Feb 12 19:49:38.325909 env[1420]: time="2024-02-12T19:49:38.325808722Z" level=warning msg="cleaning up after shim disconnected" id=2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d namespace=k8s.io
Feb 12 19:49:38.325909 env[1420]: time="2024-02-12T19:49:38.325819922Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:38.333739 env[1420]: time="2024-02-12T19:49:38.333712764Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7206 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:38.659393 kubelet[2648]: E0212 19:49:38.658508 2648 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.35:50854->10.200.8.28:2379: read: connection timed out
Feb 12 19:49:38.896857 kubelet[2648]: I0212 19:49:38.896819 2648 scope.go:115] "RemoveContainer" containerID="2b2c2f2dd3de8b4bb3f1faea1f60db9473238611649711bd24c52d266c836b6d"
Feb 12 19:49:38.898988 env[1420]: time="2024-02-12T19:49:38.898946723Z" level=info msg="CreateContainer within sandbox \"a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 12 19:49:38.935601 env[1420]: time="2024-02-12T19:49:38.935559221Z" level=info msg="CreateContainer within sandbox \"a9bc2a3e8e552ccfee03411977ec65f093b66da02921e9a266a2112aa9befdbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ecce7382f2678a12f8ef3f7fc23539164aa9e67d47efee89a3e16dff7c3a5767\""
Feb 12 19:49:38.935948 env[1420]: time="2024-02-12T19:49:38.935919623Z" level=info msg="StartContainer for \"ecce7382f2678a12f8ef3f7fc23539164aa9e67d47efee89a3e16dff7c3a5767\""
Feb 12 19:49:39.011262 env[1420]: time="2024-02-12T19:49:39.011210730Z" level=info msg="StartContainer for \"ecce7382f2678a12f8ef3f7fc23539164aa9e67d47efee89a3e16dff7c3a5767\" returns successfully"
Feb 12 19:49:39.327985 systemd[1]: run-containerd-runc-k8s.io-ecce7382f2678a12f8ef3f7fc23539164aa9e67d47efee89a3e16dff7c3a5767-runc.gYlElg.mount: Deactivated successfully.