Feb 9 19:00:16.100591 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:00:16.100622 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:16.100634 kernel: BIOS-provided physical RAM map:
Feb 9 19:00:16.100641 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:00:16.100647 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:00:16.100652 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:00:16.100665 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:00:16.100671 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:00:16.100680 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:00:16.100686 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:00:16.100691 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:00:16.100699 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:00:16.100706 kernel: NX (Execute Disable) protection: active
Feb 9 19:00:16.100713 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:00:16.100726 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:00:16.100732 kernel: random: crng init done
Feb 9 19:00:16.100740 kernel: SMBIOS 3.1.0 present.
Feb 9 19:00:16.100749 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:00:16.100757 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:00:16.100764 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:00:16.100771 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:00:16.100777 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:00:16.100788 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:00:16.100795 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:00:16.100804 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:00:16.100811 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:00:16.100818 kernel: tsc: Detected 2593.906 MHz processor
Feb 9 19:00:16.100828 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:00:16.100835 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:00:16.100845 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:00:16.100852 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:00:16.100858 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:00:16.100871 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:00:16.100877 kernel: Using GB pages for direct mapping
Feb 9 19:00:16.100886 kernel: Secure boot disabled
Feb 9 19:00:16.100893 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:00:16.100899 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:00:16.100907 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100915 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100922 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:00:16.100937 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:00:16.100944 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100954 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100961 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100971 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100978 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100989 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.100998 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:16.101005 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:00:16.101015 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:00:16.101022 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:00:16.101029 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:00:16.101038 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:00:16.101046 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:00:16.101058 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:00:16.101065 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:00:16.101073 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:00:16.101082 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:00:16.101090 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:00:16.101099 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:00:16.101105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:00:16.101113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:00:16.101122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:00:16.101135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:00:16.101142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:00:16.101149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:00:16.101158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:00:16.101166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:00:16.101175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:00:16.101183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:00:16.101190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:00:16.101198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:00:16.101209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:00:16.101220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:00:16.101227 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:00:16.101234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:00:16.101244 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:00:16.101251 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:00:16.101262 kernel: Zone ranges:
Feb 9 19:00:16.101269 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:00:16.101276 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:00:16.101289 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:16.101296 kernel: Movable zone start for each node
Feb 9 19:00:16.101306 kernel: Early memory node ranges
Feb 9 19:00:16.101312 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:00:16.101320 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:00:16.101330 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:00:16.101337 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:16.101346 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:00:16.101353 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:00:16.101364 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:00:16.101373 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:00:16.101381 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:00:16.101390 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:00:16.101397 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:00:16.101405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:00:16.101414 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:00:16.101421 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:00:16.101431 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:00:16.101441 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:00:16.101450 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:00:16.101458 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:00:16.101466 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:00:16.101475 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:00:16.101483 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:00:16.101494 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:00:16.101506 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:00:16.101521 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:00:16.101539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:00:16.101554 kernel: Policy zone: Normal
Feb 9 19:00:16.101578 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:16.101595 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:00:16.101608 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:00:16.101623 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:00:16.101637 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:00:16.101653 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:00:16.101673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:00:16.101689 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:00:16.101716 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:00:16.101737 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:00:16.101753 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:00:16.101768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:00:16.101783 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:00:16.101797 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:00:16.101813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:00:16.101828 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:00:16.101842 kernel: Using NULL legacy PIC
Feb 9 19:00:16.101860 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:00:16.101874 kernel: Console: colour dummy device 80x25
Feb 9 19:00:16.101889 kernel: printk: console [tty1] enabled
Feb 9 19:00:16.101903 kernel: printk: console [ttyS0] enabled
Feb 9 19:00:16.101917 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:00:16.101934 kernel: ACPI: Core revision 20210730
Feb 9 19:00:16.101948 kernel: Failed to register legacy timer interrupt
Feb 9 19:00:16.101963 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:00:16.101979 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:00:16.101993 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 9 19:00:16.102008 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:00:16.102024 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:00:16.102039 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:00:16.102054 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:00:16.102068 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:00:16.102089 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:00:16.102104 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:00:16.102120 kernel: RETBleed: Vulnerable
Feb 9 19:00:16.102133 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:00:16.102148 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:16.102162 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:16.102176 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:00:16.102189 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:00:16.102203 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:00:16.102217 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:00:16.102236 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:00:16.102250 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:00:16.102265 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:00:16.102280 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:00:16.102296 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:00:16.102311 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:00:16.102325 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:00:16.102341 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:00:16.102356 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:00:16.102367 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:00:16.102378 kernel: LSM: Security Framework initializing
Feb 9 19:00:16.102389 kernel: SELinux: Initializing.
Feb 9 19:00:16.108385 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:16.108412 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:16.108428 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:00:16.108443 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:00:16.108457 kernel: signal: max sigframe size: 3632
Feb 9 19:00:16.108472 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:00:16.108486 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:00:16.108500 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:00:16.108514 kernel: x86: Booting SMP configuration:
Feb 9 19:00:16.108528 kernel: .... node #0, CPUs: #1
Feb 9 19:00:16.108551 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:00:16.108564 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:00:16.108599 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:00:16.108612 kernel: smpboot: Max logical packages: 1
Feb 9 19:00:16.108627 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:00:16.108641 kernel: devtmpfs: initialized
Feb 9 19:00:16.108656 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:00:16.108670 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:00:16.108689 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:00:16.108703 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:00:16.108717 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:00:16.108731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:00:16.108746 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:00:16.108760 kernel: audit: type=2000 audit(1707505214.024:1): state=initialized audit_enabled=0 res=1
Feb 9 19:00:16.108774 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:00:16.108789 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:00:16.108803 kernel: cpuidle: using governor menu
Feb 9 19:00:16.108820 kernel: ACPI: bus type PCI registered
Feb 9 19:00:16.108835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:00:16.108849 kernel: dca service started, version 1.12.1
Feb 9 19:00:16.108863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:00:16.108877 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:00:16.108891 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:00:16.108906 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:00:16.108920 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:00:16.108933 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:00:16.108950 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:00:16.108964 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:00:16.108978 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:00:16.108993 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:00:16.109007 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:00:16.109020 kernel: ACPI: Interpreter enabled
Feb 9 19:00:16.109035 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:00:16.109049 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:00:16.109064 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:00:16.109080 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:00:16.109095 kernel: iommu: Default domain type: Translated
Feb 9 19:00:16.109109 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:00:16.109123 kernel: vgaarb: loaded
Feb 9 19:00:16.109137 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:00:16.109151 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:00:16.109165 kernel: PTP clock support registered
Feb 9 19:00:16.109179 kernel: Registered efivars operations
Feb 9 19:00:16.109193 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:00:16.109207 kernel: PCI: System does not support PCI
Feb 9 19:00:16.109224 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:00:16.109238 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:00:16.109252 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:00:16.109266 kernel: pnp: PnP ACPI init
Feb 9 19:00:16.109281 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:00:16.109295 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:00:16.109309 kernel: NET: Registered PF_INET protocol family
Feb 9 19:00:16.109323 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:16.109340 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:00:16.109354 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:00:16.109368 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:00:16.109382 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:16.109396 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:00:16.109410 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:16.109425 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:16.109439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:00:16.109453 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:00:16.109470 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:00:16.109484 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:00:16.109499 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:00:16.109513 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:00:16.109527 kernel: Initialise system trusted keyrings
Feb 9 19:00:16.109540 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:00:16.109554 kernel: Key type asymmetric registered
Feb 9 19:00:16.109575 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:00:16.109586 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:00:16.109608 kernel: io scheduler mq-deadline registered
Feb 9 19:00:16.109620 kernel: io scheduler kyber registered
Feb 9 19:00:16.109630 kernel: io scheduler bfq registered
Feb 9 19:00:16.109640 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:00:16.109650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:00:16.109662 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:00:16.109675 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:00:16.109689 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:00:16.109877 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:00:16.109958 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:15 UTC (1707505215)
Feb 9 19:00:16.110039 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:00:16.110050 kernel: fail to initialize ptp_kvm
Feb 9 19:00:16.110061 kernel: intel_pstate: CPU model not supported
Feb 9 19:00:16.110069 kernel: efifb: probing for efifb
Feb 9 19:00:16.110077 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:00:16.110088 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:00:16.110097 kernel: efifb: scrolling: redraw
Feb 9 19:00:16.110110 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:00:16.110118 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:00:16.110128 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:00:16.110137 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:00:16.110146 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:00:16.110154 kernel: Segment Routing with IPv6
Feb 9 19:00:16.110164 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:00:16.110173 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:00:16.110184 kernel: Key type dns_resolver registered
Feb 9 19:00:16.110194 kernel: IPI shorthand broadcast: enabled
Feb 9 19:00:16.110204 kernel: sched_clock: Marking stable (824285400, 23577500)->(1045410100, -197547200)
Feb 9 19:00:16.110213 kernel: registered taskstats version 1
Feb 9 19:00:16.110224 kernel: Loading compiled-in X.509 certificates
Feb 9 19:00:16.110232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:00:16.110239 kernel: Key type .fscrypt registered
Feb 9 19:00:16.110247 kernel: Key type fscrypt-provisioning registered
Feb 9 19:00:16.110258 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:00:16.110270 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:00:16.110280 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:00:16.110287 kernel: ima: No architecture policies found
Feb 9 19:00:16.110297 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:00:16.110306 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:00:16.110316 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:00:16.110324 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:00:16.110332 kernel: Run /init as init process
Feb 9 19:00:16.110342 kernel: with arguments:
Feb 9 19:00:16.110351 kernel: /init
Feb 9 19:00:16.110364 kernel: with environment:
Feb 9 19:00:16.110372 kernel: HOME=/
Feb 9 19:00:16.110380 kernel: TERM=linux
Feb 9 19:00:16.110389 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:00:16.110403 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:00:16.110414 systemd[1]: Detected virtualization microsoft.
Feb 9 19:00:16.110424 systemd[1]: Detected architecture x86-64.
Feb 9 19:00:16.110435 systemd[1]: Running in initrd.
Feb 9 19:00:16.110447 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:00:16.110454 systemd[1]: Hostname set to .
Feb 9 19:00:16.110464 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:00:16.110474 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:00:16.110485 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:00:16.110494 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:00:16.110502 systemd[1]: Reached target paths.target.
Feb 9 19:00:16.110512 systemd[1]: Reached target slices.target.
Feb 9 19:00:16.110526 systemd[1]: Reached target swap.target.
Feb 9 19:00:16.110534 systemd[1]: Reached target timers.target.
Feb 9 19:00:16.110544 systemd[1]: Listening on iscsid.socket.
Feb 9 19:00:16.110554 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:00:16.110564 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:00:16.120977 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:00:16.120990 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:00:16.121008 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:00:16.121019 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:00:16.121031 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:00:16.121042 systemd[1]: Reached target sockets.target.
Feb 9 19:00:16.121053 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:00:16.121064 systemd[1]: Finished network-cleanup.service.
Feb 9 19:00:16.121075 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:00:16.121087 systemd[1]: Starting systemd-journald.service...
Feb 9 19:00:16.121098 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:00:16.121113 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:00:16.121126 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:00:16.121139 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:00:16.121153 kernel: audit: type=1130 audit(1707505216.109:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.121175 systemd-journald[183]: Journal started
Feb 9 19:00:16.121260 systemd-journald[183]: Runtime Journal (/run/log/journal/0879cc4fcecd43dab4bf9a4f093076d1) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:00:16.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.121791 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:00:16.127282 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:00:16.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.152261 systemd[1]: Started systemd-journald.service.
Feb 9 19:00:16.152361 kernel: audit: type=1130 audit(1707505216.135:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.170995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:00:16.171068 kernel: Bridge firewalling registered
Feb 9 19:00:16.165656 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:00:16.174125 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:00:16.180816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:00:16.183101 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:00:16.202590 kernel: audit: type=1130 audit(1707505216.164:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.221002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:00:16.228006 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:00:16.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.241939 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:00:16.243207 kernel: audit: type=1130 audit(1707505216.170:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.243286 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:00:16.246526 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:00:16.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.263813 systemd[1]: Started systemd-resolved.service.
Feb 9 19:00:16.288084 kernel: audit: type=1130 audit(1707505216.226:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.288113 kernel: SCSI subsystem initialized
Feb 9 19:00:16.291665 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:00:16.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.294312 systemd[1]: Reached target nss-lookup.target. Feb 9 19:00:16.311228 kernel: audit: type=1130 audit(1707505216.290:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.310222 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:00:16.339347 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:00:16.339445 kernel: audit: type=1130 audit(1707505216.293:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:16.339543 dracut-cmdline[201]: dracut-dracut-053 Feb 9 19:00:16.339543 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:16.364231 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:00:16.364291 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:00:16.368458 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 9 19:00:16.370814 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:16.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.378294 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:16.395927 kernel: audit: type=1130 audit(1707505216.376:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.403797 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:00:16.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.420594 kernel: audit: type=1130 audit(1707505216.406:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:16.458595 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:00:16.472593 kernel: iscsi: registered transport (tcp) Feb 9 19:00:16.498472 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:00:16.498582 kernel: QLogic iSCSI HBA Driver Feb 9 19:00:16.529719 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:00:16.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.535798 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:00:16.588603 kernel: raid6: avx512x4 gen() 27688 MB/s Feb 9 19:00:16.608584 kernel: raid6: avx512x4 xor() 6172 MB/s Feb 9 19:00:16.629584 kernel: raid6: avx512x2 gen() 27946 MB/s Feb 9 19:00:16.649581 kernel: raid6: avx512x2 xor() 29386 MB/s Feb 9 19:00:16.669581 kernel: raid6: avx512x1 gen() 28268 MB/s Feb 9 19:00:16.690586 kernel: raid6: avx512x1 xor() 26645 MB/s Feb 9 19:00:16.710578 kernel: raid6: avx2x4 gen() 25035 MB/s Feb 9 19:00:16.730579 kernel: raid6: avx2x4 xor() 6045 MB/s Feb 9 19:00:16.751582 kernel: raid6: avx2x2 gen() 25895 MB/s Feb 9 19:00:16.771580 kernel: raid6: avx2x2 xor() 22201 MB/s Feb 9 19:00:16.791578 kernel: raid6: avx2x1 gen() 23198 MB/s Feb 9 19:00:16.812579 kernel: raid6: avx2x1 xor() 19282 MB/s Feb 9 19:00:16.832577 kernel: raid6: sse2x4 gen() 10558 MB/s Feb 9 19:00:16.852578 kernel: raid6: sse2x4 xor() 6529 MB/s Feb 9 19:00:16.873578 kernel: raid6: sse2x2 gen() 11195 MB/s Feb 9 19:00:16.893578 kernel: raid6: sse2x2 xor() 7420 MB/s Feb 9 19:00:16.913577 kernel: raid6: sse2x1 gen() 10498 MB/s Feb 9 19:00:16.938321 kernel: raid6: sse2x1 xor() 5919 MB/s Feb 9 19:00:16.938356 kernel: raid6: using algorithm avx512x1 gen() 28268 MB/s Feb 9 19:00:16.938367 kernel: raid6: .... 
xor() 26645 MB/s, rmw enabled Feb 9 19:00:16.941735 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:00:16.961601 kernel: xor: automatically using best checksumming function avx Feb 9 19:00:17.060604 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:00:17.070111 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:00:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.074000 audit: BPF prog-id=7 op=LOAD Feb 9 19:00:17.074000 audit: BPF prog-id=8 op=LOAD Feb 9 19:00:17.075160 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:17.092055 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 9 19:00:17.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.099123 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:17.105339 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:00:17.123948 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Feb 9 19:00:17.157859 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:00:17.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.160116 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:00:17.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.199585 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:00:17.260596 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:00:17.276598 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:00:17.293596 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:00:17.299592 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:00:17.304613 kernel: AES CTR mode by8 optimization enabled Feb 9 19:00:17.311775 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:00:17.327888 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:00:17.327958 kernel: scsi host1: storvsc_host_t Feb 9 19:00:17.328004 kernel: scsi host0: storvsc_host_t Feb 9 19:00:17.336825 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:00:17.345598 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:00:17.351600 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:00:17.372623 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:00:17.378895 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:00:17.378963 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:00:17.389943 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:00:17.412453 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:00:17.412856 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:00:17.419860 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:00:17.420111 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:00:17.420246 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:00:17.427228 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:00:17.427425 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:00:17.427552 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 19:00:17.437590 kernel: 
sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:17.443001 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:00:17.530833 kernel: hv_netvsc 000d3adf-7f9e-000d-3adf-7f9e000d3adf eth0: VF slot 1 added Feb 9 19:00:17.540588 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:00:17.547591 kernel: hv_pci cd7c0213-e444-458e-9c3e-b08cd0a6d4fb: PCI VMBus probing: Using version 0x10004 Feb 9 19:00:17.560226 kernel: hv_pci cd7c0213-e444-458e-9c3e-b08cd0a6d4fb: PCI host bridge to bus e444:00 Feb 9 19:00:17.560471 kernel: pci_bus e444:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:00:17.560626 kernel: pci_bus e444:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:00:17.570910 kernel: pci e444:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:00:17.581030 kernel: pci e444:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:17.599583 kernel: pci e444:00:02.0: enabling Extended Tags Feb 9 19:00:17.614607 kernel: pci e444:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e444:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:00:17.623876 kernel: pci_bus e444:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:00:17.624113 kernel: pci e444:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:17.719601 kernel: mlx5_core e444:00:02.0: firmware version: 14.30.1224 Feb 9 19:00:17.840928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:00:17.891596 kernel: mlx5_core e444:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:00:17.900595 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (448) Feb 9 19:00:17.916509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:00:18.032139 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Feb 9 19:00:18.052365 kernel: mlx5_core e444:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:00:18.052681 kernel: mlx5_core e444:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Feb 9 19:00:18.065506 kernel: hv_netvsc 000d3adf-7f9e-000d-3adf-7f9e000d3adf eth0: VF registering: eth1 Feb 9 19:00:18.065793 kernel: mlx5_core e444:00:02.0 eth1: joined to eth0 Feb 9 19:00:18.077741 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:00:18.084629 kernel: mlx5_core e444:00:02.0 enP58436s1: renamed from eth1 Feb 9 19:00:18.087695 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:00:18.101793 systemd[1]: Starting disk-uuid.service... Feb 9 19:00:18.118593 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:18.127594 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:19.136605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:19.137809 disk-uuid[560]: The operation has completed successfully. Feb 9 19:00:19.215285 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:00:19.215418 systemd[1]: Finished disk-uuid.service. Feb 9 19:00:19.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.227880 systemd[1]: Starting verity-setup.service... Feb 9 19:00:19.263631 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:00:19.545495 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:00:19.552048 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:00:19.556822 systemd[1]: Finished verity-setup.service. 
Feb 9 19:00:19.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.635511 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:00:19.639641 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:00:19.639762 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:00:19.644560 systemd[1]: Starting ignition-setup.service... Feb 9 19:00:19.650336 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:00:19.672989 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:19.673093 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:19.673124 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:19.722854 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:00:19.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.727000 audit: BPF prog-id=9 op=LOAD Feb 9 19:00:19.729281 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:19.754884 systemd-networkd[799]: lo: Link UP Feb 9 19:00:19.754896 systemd-networkd[799]: lo: Gained carrier Feb 9 19:00:19.759178 systemd-networkd[799]: Enumeration completed Feb 9 19:00:19.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.759314 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:19.763710 systemd[1]: Reached target network.target. Feb 9 19:00:19.763963 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:00:19.769995 systemd[1]: Starting iscsiuio.service... Feb 9 19:00:19.784372 systemd[1]: Started iscsiuio.service. Feb 9 19:00:19.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.789830 systemd[1]: Starting iscsid.service... Feb 9 19:00:19.794525 iscsid[809]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:19.794525 iscsid[809]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:00:19.794525 iscsid[809]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:00:19.794525 iscsid[809]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:00:19.794525 iscsid[809]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:19.794525 iscsid[809]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:00:19.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.798661 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:00:19.798999 systemd[1]: Started iscsid.service.
Feb 9 19:00:19.815694 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:00:19.829832 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:00:19.834411 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:00:19.838989 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:00:19.841311 systemd[1]: Reached target remote-fs.target. Feb 9 19:00:19.844518 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:00:19.867196 kernel: mlx5_core e444:00:02.0 enP58436s1: Link up Feb 9 19:00:19.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.857074 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:00:19.907349 systemd[1]: Finished ignition-setup.service. Feb 9 19:00:19.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.913262 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:00:19.939620 kernel: hv_netvsc 000d3adf-7f9e-000d-3adf-7f9e000d3adf eth0: Data path switched to VF: enP58436s1 Feb 9 19:00:19.944602 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:00:19.944907 systemd-networkd[799]: enP58436s1: Link UP Feb 9 19:00:19.945047 systemd-networkd[799]: eth0: Link UP Feb 9 19:00:19.945267 systemd-networkd[799]: eth0: Gained carrier Feb 9 19:00:19.950226 systemd-networkd[799]: enP58436s1: Gained carrier Feb 9 19:00:19.990667 systemd-networkd[799]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:00:21.923855 systemd-networkd[799]: eth0: Gained IPv6LL Feb 9 19:00:23.195119 ignition[826]: Ignition 2.14.0 Feb 9 19:00:23.195138 ignition[826]: Stage: fetch-offline Feb 9 19:00:23.195241 ignition[826]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.195303 ignition[826]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.294999 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.295260 ignition[826]: parsed url from cmdline: "" Feb 9 19:00:23.295267 ignition[826]: no config URL provided Feb 9 19:00:23.295275 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:23.303650 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:00:23.334547 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:00:23.334589 kernel: audit: type=1130 audit(1707505223.310:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:23.295286 ignition[826]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:23.312551 systemd[1]: Starting ignition-fetch.service... Feb 9 19:00:23.295294 ignition[826]: failed to fetch config: resource requires networking Feb 9 19:00:23.295913 ignition[826]: Ignition finished successfully Feb 9 19:00:23.323373 ignition[832]: Ignition 2.14.0 Feb 9 19:00:23.323382 ignition[832]: Stage: fetch Feb 9 19:00:23.323518 ignition[832]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.323547 ignition[832]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.327631 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.329550 ignition[832]: parsed url from cmdline: "" Feb 9 19:00:23.329555 ignition[832]: no config URL provided Feb 9 19:00:23.329562 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:23.329588 ignition[832]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:23.329629 ignition[832]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:00:23.431825 ignition[832]: GET result: OK Feb 9 19:00:23.432025 ignition[832]: config has been read from IMDS userdata Feb 9 19:00:23.432081 ignition[832]: parsing config with SHA512: 3c224d8592f73d5edcff5480418853189bd42a680e971ae553a5827a3b7f4adfa4d8ee9a8e43a41e23a23ab1455964a0d0675fbadcf98c12450a91c3b8050d87 Feb 9 19:00:23.471237 unknown[832]: fetched base config from "system" Feb 9 19:00:23.471250 unknown[832]: fetched base config from "system" Feb 9 19:00:23.472032 ignition[832]: fetch: fetch complete Feb 9 19:00:23.497147 kernel: audit: type=1130 audit(1707505223.478:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:00:23.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.471258 unknown[832]: fetched user config from "azure" Feb 9 19:00:23.472040 ignition[832]: fetch: fetch passed Feb 9 19:00:23.476122 systemd[1]: Finished ignition-fetch.service. Feb 9 19:00:23.472089 ignition[832]: Ignition finished successfully Feb 9 19:00:23.481920 systemd[1]: Starting ignition-kargs.service... Feb 9 19:00:23.505014 ignition[838]: Ignition 2.14.0 Feb 9 19:00:23.505021 ignition[838]: Stage: kargs Feb 9 19:00:23.505185 ignition[838]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.505227 ignition[838]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.509530 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.522881 ignition[838]: kargs: kargs passed Feb 9 19:00:23.522962 ignition[838]: Ignition finished successfully Feb 9 19:00:23.527923 systemd[1]: Finished ignition-kargs.service. Feb 9 19:00:23.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.533922 systemd[1]: Starting ignition-disks.service... Feb 9 19:00:23.550824 kernel: audit: type=1130 audit(1707505223.532:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:23.544268 ignition[844]: Ignition 2.14.0 Feb 9 19:00:23.544280 ignition[844]: Stage: disks Feb 9 19:00:23.544444 ignition[844]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.576250 kernel: audit: type=1130 audit(1707505223.556:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.554454 systemd[1]: Finished ignition-disks.service. Feb 9 19:00:23.544481 ignition[844]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.557027 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:00:23.550316 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.571914 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:23.553293 ignition[844]: disks: disks passed Feb 9 19:00:23.574067 systemd[1]: Reached target local-fs.target. Feb 9 19:00:23.553351 ignition[844]: Ignition finished successfully Feb 9 19:00:23.576182 systemd[1]: Reached target sysinit.target. Feb 9 19:00:23.580126 systemd[1]: Reached target basic.target. Feb 9 19:00:23.599450 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:00:23.668031 systemd-fsck[852]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:00:23.696064 kernel: audit: type=1130 audit(1707505223.677:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:23.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.674927 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:00:23.694588 systemd[1]: Mounting sysroot.mount... Feb 9 19:00:23.714596 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:00:23.715366 systemd[1]: Mounted sysroot.mount. Feb 9 19:00:23.719373 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:00:23.754715 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:00:23.760525 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:00:23.765232 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:00:23.765282 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:00:23.776454 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:00:23.813095 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:23.819548 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:00:23.831618 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (863) Feb 9 19:00:23.841288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:23.841365 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:23.841377 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:23.846032 initrd-setup-root[868]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:00:23.853101 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:00:23.869554 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:00:23.877344 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:00:23.884357 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:00:24.342614 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:00:24.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.369549 kernel: audit: type=1130 audit(1707505224.344:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.359678 systemd[1]: Starting ignition-mount.service... Feb 9 19:00:24.364197 systemd[1]: Starting sysroot-boot.service... Feb 9 19:00:24.369308 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:24.369450 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:24.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.392032 systemd[1]: Finished sysroot-boot.service. Feb 9 19:00:24.410249 kernel: audit: type=1130 audit(1707505224.394:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:24.416033 ignition[931]: INFO : Ignition 2.14.0 Feb 9 19:00:24.418626 ignition[931]: INFO : Stage: mount Feb 9 19:00:24.418626 ignition[931]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:24.418626 ignition[931]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:24.430232 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:24.430232 ignition[931]: INFO : mount: mount passed Feb 9 19:00:24.430232 ignition[931]: INFO : Ignition finished successfully Feb 9 19:00:24.450812 kernel: audit: type=1130 audit(1707505224.435:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.427810 systemd[1]: Finished ignition-mount.service. Feb 9 19:00:25.379718 coreos-metadata[862]: Feb 09 19:00:25.379 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:00:25.394609 coreos-metadata[862]: Feb 09 19:00:25.394 INFO Fetch successful Feb 9 19:00:25.429708 coreos-metadata[862]: Feb 09 19:00:25.429 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:00:25.443245 coreos-metadata[862]: Feb 09 19:00:25.443 INFO Fetch successful Feb 9 19:00:25.464606 coreos-metadata[862]: Feb 09 19:00:25.464 INFO wrote hostname ci-3510.3.2-a-00ed68a33d to /sysroot/etc/hostname Feb 9 19:00:25.470745 systemd[1]: Finished flatcar-metadata-hostname.service. 
Feb 9 19:00:25.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.476983 systemd[1]: Starting ignition-files.service... Feb 9 19:00:25.490263 kernel: audit: type=1130 audit(1707505225.475:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.496892 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:25.511756 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (941) Feb 9 19:00:25.520811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:25.520881 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:25.520892 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:25.529562 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:00:25.546277 ignition[960]: INFO : Ignition 2.14.0
Feb 9 19:00:25.546277 ignition[960]: INFO : Stage: files
Feb 9 19:00:25.550869 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:25.550869 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:25.560696 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:25.569130 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:00:25.573024 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:00:25.576781 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:00:25.644550 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:00:25.649283 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:00:25.668748 unknown[960]: wrote ssh authorized keys file for user: core
Feb 9 19:00:25.672181 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:00:25.672181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:00:25.672181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:26.339374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:00:26.475244 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:00:26.484851 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:00:26.484851 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:26.484851 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:27.462049 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:00:27.590717 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:27.596923 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:00:27.596923 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:00:27.596923 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:00:27.596923 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:00:28.103197 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:00:28.721388 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:00:28.729962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:00:28.729962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:00:28.729962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:00:33.961159 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:01:26.721876 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:01:26.730933 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:01:26.730933 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:01:26.730933 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:01:27.473397 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:01:49.658106 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:01:49.666806 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:01:49.666806 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:01:49.666806 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:01:50.313847 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 19:02:11.056243 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:02:11.069697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:11.151590 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (962)
Feb 9 19:02:11.093074 systemd[1]: mnt-oem1228909684.mount: Deactivated successfully.
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228909684"
Feb 9 19:02:11.157096 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228909684": device or resource busy
Feb 9 19:02:11.157096 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1228909684", trying btrfs: device or resource busy
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228909684"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228909684"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1228909684"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1228909684"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2674204983"
Feb 9 19:02:11.157096 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2674204983": device or resource busy
Feb 9 19:02:11.157096 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2674204983", trying btrfs: device or resource busy
Feb 9 19:02:11.157096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2674204983"
Feb 9 19:02:11.242968 kernel: audit: type=1130 audit(1707505331.174:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.110275 systemd[1]: mnt-oem2674204983.mount: Deactivated successfully.
Feb 9 19:02:11.245544 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2674204983"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2674204983"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2674204983"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1a): [started] processing unit "containerd.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1a): [finished] processing unit "containerd.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1e): [started] processing unit "prepare-critools.service"
Feb 9 19:02:11.245544 ignition[960]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:02:11.151656 systemd[1]: Finished ignition-files.service.
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(20): [started] processing unit "prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(20): op(21): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(20): op(21): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(20): [finished] processing unit "prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(26): [started] setting preset to enabled for "waagent.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: op(26): [finished] setting preset to enabled for "waagent.service"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:02:11.325919 ignition[960]: INFO : files: files passed
Feb 9 19:02:11.325919 ignition[960]: INFO : Ignition finished successfully
Feb 9 19:02:11.407979 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:02:11.412399 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:02:11.414196 systemd[1]: Starting ignition-quench.service...
Feb 9 19:02:11.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.425917 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:02:11.456685 kernel: audit: type=1130 audit(1707505331.430:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.456711 kernel: audit: type=1131 audit(1707505331.430:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.426029 systemd[1]: Finished ignition-quench.service.
Feb 9 19:02:11.470513 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:02:11.475202 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:02:11.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.478221 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:02:11.497247 kernel: audit: type=1130 audit(1707505331.478:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.498237 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:02:11.514219 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:02:11.514320 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:02:11.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.531797 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:02:11.552239 kernel: audit: type=1130 audit(1707505331.519:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.552280 kernel: audit: type=1131 audit(1707505331.519:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.550200 systemd[1]: Reached target initrd.target.
Feb 9 19:02:11.552325 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:02:11.553477 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:02:11.570795 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:02:11.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.576803 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:02:11.589580 kernel: audit: type=1130 audit(1707505331.575:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.598287 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:02:11.600755 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:02:11.605422 systemd[1]: Stopped target timers.target.
Feb 9 19:02:11.609718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:02:11.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.609889 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:02:11.631473 kernel: audit: type=1131 audit(1707505331.613:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.627285 systemd[1]: Stopped target initrd.target.
Feb 9 19:02:11.631624 systemd[1]: Stopped target basic.target.
Feb 9 19:02:11.635889 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:02:11.640249 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:02:11.645016 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:02:11.649650 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:02:11.654362 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:02:11.658717 systemd[1]: Stopped target sysinit.target.
Feb 9 19:02:11.663048 systemd[1]: Stopped target local-fs.target.
Feb 9 19:02:11.667357 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:02:11.671692 systemd[1]: Stopped target swap.target.
Feb 9 19:02:11.675766 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:02:11.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.675941 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:02:11.698244 kernel: audit: type=1131 audit(1707505331.679:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.693752 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:02:11.698333 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:02:11.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.698517 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:02:11.720601 kernel: audit: type=1131 audit(1707505331.702:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.715963 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:02:11.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.716166 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:02:11.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.720678 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:02:11.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.720843 systemd[1]: Stopped ignition-files.service.
Feb 9 19:02:11.725990 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:02:11.726135 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:02:11.732308 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:02:11.745295 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:02:11.753796 ignition[999]: INFO : Ignition 2.14.0
Feb 9 19:02:11.753796 ignition[999]: INFO : Stage: umount
Feb 9 19:02:11.753796 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:02:11.753796 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:02:11.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.753267 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:02:11.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.778987 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:02:11.759381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:02:11.787351 ignition[999]: INFO : umount: umount passed
Feb 9 19:02:11.787351 ignition[999]: INFO : Ignition finished successfully
Feb 9 19:02:11.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.762239 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:02:11.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.765097 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:02:11.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.765242 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:02:11.780686 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:02:11.780814 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:02:11.787961 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:02:11.788069 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:02:11.792906 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:02:11.793006 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:02:11.797395 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:02:11.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.797452 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:02:11.800798 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:02:11.800837 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:02:11.802922 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:02:11.802970 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:02:11.805082 systemd[1]: Stopped target network.target.
Feb 9 19:02:11.807223 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:02:11.807279 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:02:11.812191 systemd[1]: Stopped target paths.target.
Feb 9 19:02:11.814134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:02:11.819654 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:02:11.823338 systemd[1]: Stopped target slices.target.
Feb 9 19:02:11.825562 systemd[1]: Stopped target sockets.target.
Feb 9 19:02:11.830277 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:02:11.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.830323 systemd[1]: Closed iscsid.socket.
Feb 9 19:02:11.834138 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:02:11.834191 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:02:11.838686 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:02:11.838752 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:02:11.842936 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:02:11.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.846903 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:02:11.851668 systemd-networkd[799]: eth0: DHCPv6 lease lost
Feb 9 19:02:11.869553 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:02:11.877870 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:02:11.877984 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:02:11.911000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:02:11.911000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:02:11.886822 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:02:11.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.893058 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:02:11.895553 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:02:11.895662 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:02:11.907822 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:02:11.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.911637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:02:11.911717 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:02:11.916218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:02:11.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.916284 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:02:11.922929 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:02:11.925005 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:02:11.934231 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:02:11.939314 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:02:11.939427 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:02:11.943635 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:02:11.943775 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:02:11.949774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:02:11.949833 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:02:11.952962 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:02:11.953013 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:02:11.960105 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:02:11.960179 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:02:11.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.986223 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:02:11.986310 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:02:11.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.992943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:02:11.995661 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:02:11.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.000250 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:02:12.002903 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:02:12.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.008320 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:02:12.011008 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:02:12.011098 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:02:12.031323 kernel: hv_netvsc 000d3adf-7f9e-000d-3adf-7f9e000d3adf eth0: Data path switched from VF: enP58436s1
Feb 9 19:02:12.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.025179 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:02:12.025276 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:02:12.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.031384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:02:12.031470 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:02:12.034297 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:02:12.034411 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:02:12.054011 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:02:12.054166 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:02:12.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.059361 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:02:12.064825 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:02:12.077393 systemd[1]: Switching root.
Feb 9 19:02:12.081000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:02:12.081000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:02:12.081000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:02:12.081000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:02:12.081000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:02:12.106116 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:02:12.106244 iscsid[809]: iscsid shutting down.
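Editor's note: the initrd teardown above is dominated by audit SERVICE_STOP records, which are flat key=value lines carrying a quoted msg='...' payload that itself contains key=value pairs. As a rough illustration only (not part of the log, and far simpler than the real auparse library from audit-userspace), a minimal parser for such lines could look like:

```python
import re

def parse_audit_record(line: str) -> dict:
    """Split an audit record like the SERVICE_STOP lines above into fields.

    The msg='...' payload contains its own key=value pairs, so it is
    parsed recursively. Simplified sketch: real audit records have more
    quoting and escaping rules than this handles.
    """
    fields = {}
    # Pull out the quoted msg='...' payload first, then parse the rest.
    m = re.search(r"msg='([^']*)'", line)
    if m:
        fields["msg"] = parse_audit_record(m.group(1))
        line = line[:m.start()] + line[m.end():]
    for key, value in re.findall(r'(\w+)=("[^"]*"|\S+)', line):
        fields[key] = value.strip('"')
    return fields

# First SERVICE_STOP record from the log above:
record = parse_audit_record(
    "audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
    "subj=kernel msg='unit=systemd-networkd comm=\"systemd\" "
    "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'"
)
print(record["msg"]["unit"])  # systemd-networkd
print(record["msg"]["res"])   # success
```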
Feb 9 19:02:12.108345 systemd-journald[183]: Journal stopped
Feb 9 19:02:26.139967 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:02:26.139996 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:02:26.140009 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:02:26.140018 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:02:26.140025 kernel: SELinux: policy capability open_perms=1
Feb 9 19:02:26.140036 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:02:26.140045 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:02:26.140059 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:02:26.140067 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:02:26.140078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:02:26.140087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:02:26.140098 systemd[1]: Successfully loaded SELinux policy in 311.325ms.
Feb 9 19:02:26.140109 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.967ms.
Feb 9 19:02:26.140120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:02:26.140135 systemd[1]: Detected virtualization microsoft.
Feb 9 19:02:26.140147 systemd[1]: Detected architecture x86-64.
Feb 9 19:02:26.140156 systemd[1]: Detected first boot.
Feb 9 19:02:26.140169 systemd[1]: Hostname set to .
Feb 9 19:02:26.140179 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:02:26.140194 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:02:26.140203 kernel: kauditd_printk_skb: 41 callbacks suppressed
Feb 9 19:02:26.140214 kernel: audit: type=1400 audit(1707505337.449:89): avc: denied { associate } for pid=1050 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:02:26.140225 kernel: audit: type=1300 audit(1707505337.449:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:26.140238 kernel: audit: type=1327 audit(1707505337.449:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:02:26.140250 kernel: audit: type=1400 audit(1707505337.457:90): avc: denied { associate } for pid=1050 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:02:26.140262 kernel: audit: type=1300 audit(1707505337.457:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:26.140271 kernel: audit: type=1307 audit(1707505337.457:90): cwd="/"
Feb 9 19:02:26.140283 kernel: audit: type=1302 audit(1707505337.457:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:26.140292 kernel: audit: type=1302 audit(1707505337.457:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:26.140303 kernel: audit: type=1327 audit(1707505337.457:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:02:26.140316 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:02:26.140328 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:02:26.140338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:26.140351 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:26.140361 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:02:26.140373 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:02:26.140383 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:02:26.140398 systemd[1]: Created slice system-getty.slice.
Feb 9 19:02:26.140409 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:02:26.140421 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:02:26.140436 systemd[1]: Created slice system-system\x2dcloudinit.slice.
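Editor's note: the `proctitle=` values in the type=1327 audit records above are hex-encoded because the process title contains NUL bytes separating the argv[] entries. Decoding the value copied from the records above (the log line itself is truncated, so the final argument is cut short) recovers the torcx-generator invocation:

```python
# Hex proctitle copied verbatim from the audit type=1327 records above.
hex_proctitle = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"

# NUL bytes separate argv[] entries in the kernel's proctitle encoding.
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
for arg in argv:
    # The last entry prints truncated ("/run/systemd/generator.la"),
    # matching the truncation in the log line itself.
    print(arg.decode())
```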
Feb 9 19:02:26.140447 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:02:26.140459 systemd[1]: Created slice user.slice.
Feb 9 19:02:26.140469 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:02:26.140479 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:02:26.140491 systemd[1]: Set up automount boot.automount.
Feb 9 19:02:26.140506 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:02:26.140516 systemd[1]: Reached target integritysetup.target.
Feb 9 19:02:26.140526 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:02:26.140537 systemd[1]: Reached target remote-fs.target.
Feb 9 19:02:26.140549 systemd[1]: Reached target slices.target.
Feb 9 19:02:26.140560 systemd[1]: Reached target swap.target.
Feb 9 19:02:26.140578 systemd[1]: Reached target torcx.target.
Feb 9 19:02:26.140588 systemd[1]: Reached target veritysetup.target.
Feb 9 19:02:26.140603 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:02:26.140613 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:02:26.140626 kernel: audit: type=1400 audit(1707505345.788:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:02:26.140636 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:02:26.140648 kernel: audit: type=1335 audit(1707505345.788:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 9 19:02:26.140658 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:02:26.140671 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:02:26.140680 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:02:26.140695 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:02:26.140705 systemd[1]: Listening on systemd-udevd-kernel.socket.
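Editor's note: the `\x2d` sequences in the slice names above (e.g. `system-serial\x2dgetty.slice`) are systemd unit-name escaping: a literal "-" inside a path component is escaped as `\x2d` because "-" itself encodes "/" in unit names. A minimal unescape sketch (illustrative only; the full rules live in `systemd-escape` and systemd.unit(5)):

```python
import re

def unescape_unit(name: str) -> str:
    """Decode systemd \\xNN escapes in a unit name component."""
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r"system-serial\x2dgetty.slice"))
# system-serial-getty.slice
```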
Feb 9 19:02:26.140718 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:02:26.140729 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:02:26.140741 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:02:26.140754 systemd[1]: Mounting media.mount...
Feb 9 19:02:26.140766 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:02:26.140776 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:02:26.140788 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:02:26.140798 systemd[1]: Mounting tmp.mount...
Feb 9 19:02:26.140810 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:02:26.140821 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:02:26.140832 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:02:26.140842 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:02:26.140860 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:02:26.140872 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:02:26.140882 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:02:26.140895 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:02:26.140906 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:02:26.140920 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:02:26.140930 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 19:02:26.140942 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 19:02:26.140957 systemd[1]: Starting systemd-journald.service...
Feb 9 19:02:26.140968 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:02:26.140979 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:02:26.140990 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:02:26.141002 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:02:26.141013 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:02:26.141023 kernel: fuse: init (API version 7.34)
Feb 9 19:02:26.141035 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:02:26.141046 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:02:26.141061 kernel: loop: module loaded
Feb 9 19:02:26.141070 systemd[1]: Mounted media.mount.
Feb 9 19:02:26.141082 kernel: audit: type=1305 audit(1707505346.136:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:02:26.141099 systemd-journald[1159]: Journal started
Feb 9 19:02:26.141147 systemd-journald[1159]: Runtime Journal (/run/log/journal/a9ed9eb29cd743f6b94a47e02caa6b86) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:02:25.788000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 9 19:02:26.136000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:02:26.153600 systemd[1]: Started systemd-journald.service.
Feb 9 19:02:26.136000 audit[1159]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff69f03e90 a2=4000 a3=7fff69f03f2c items=0 ppid=1 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:26.136000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:02:26.174639 kernel: audit: type=1300 audit(1707505346.136:93): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff69f03e90 a2=4000 a3=7fff69f03f2c items=0 ppid=1 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:26.174688 kernel: audit: type=1327 audit(1707505346.136:93): proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:02:26.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.184682 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:02:26.199381 kernel: audit: type=1130 audit(1707505346.183:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.199639 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:02:26.201957 systemd[1]: Mounted tmp.mount.
Feb 9 19:02:26.204097 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:02:26.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.206763 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:02:26.219896 kernel: audit: type=1130 audit(1707505346.206:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.222286 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:02:26.222496 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:02:26.236373 kernel: audit: type=1130 audit(1707505346.221:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.238134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:02:26.238429 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:02:26.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.252615 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:02:26.252857 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:02:26.268328 kernel: audit: type=1130 audit(1707505346.237:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.268391 kernel: audit: type=1131 audit(1707505346.237:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.268809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:02:26.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.269198 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:02:26.272332 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:02:26.273151 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:02:26.275762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:02:26.276192 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:02:26.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.279062 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:02:26.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.281859 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:02:26.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.285112 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:02:26.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.288171 systemd[1]: Reached target network-pre.target.
Feb 9 19:02:26.292798 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:02:26.298268 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:02:26.301410 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:02:26.343535 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:02:26.347612 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:02:26.350233 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:02:26.351528 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:02:26.353968 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:02:26.355232 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:02:26.358766 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:02:26.366131 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:02:26.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.369215 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:02:26.371591 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:02:26.374901 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:02:26.389253 systemd-journald[1159]: Time spent on flushing to /var/log/journal/a9ed9eb29cd743f6b94a47e02caa6b86 is 30.005ms for 1132 entries.
Feb 9 19:02:26.389253 systemd-journald[1159]: System Journal (/var/log/journal/a9ed9eb29cd743f6b94a47e02caa6b86) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:02:26.479806 systemd-journald[1159]: Received client request to flush runtime journal.
Feb 9 19:02:26.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.482450 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:02:26.401885 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:02:26.404758 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:02:26.452911 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:02:26.481011 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:02:26.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.001066 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:02:27.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.006046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:02:27.306705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:02:27.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.554016 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:02:27.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.559800 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:02:27.582192 systemd-udevd[1214]: Using default interface naming scheme 'v252'.
Feb 9 19:02:27.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.808488 systemd[1]: Started systemd-udevd.service.
Feb 9 19:02:27.814991 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:02:27.856133 systemd[1]: Found device dev-ttyS0.device.
Feb 9 19:02:27.910173 systemd[1]: Starting systemd-userdbd.service...
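Editor's note: the journald message above reports 30.005 ms spent flushing 1132 entries to /var/log/journal. A quick back-of-the-envelope from those two numbers (illustration only, nothing beyond the figures in the log):

```python
# Figures from the systemd-journald flush message above.
flush_ms = 30.005   # total time spent flushing
entries = 1132      # entries flushed to /var/log/journal

per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per entry")  # roughly 26.5 us per entry
```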
Feb 9 19:02:27.930596 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:02:27.943000 audit[1231]: AVC avc: denied { confidentiality } for pid=1231 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:02:27.953596 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:02:27.960595 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:02:27.943000 audit[1231]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5598a9a5b330 a1=f884 a2=7f44fd281bc5 a3=5 items=12 ppid=1214 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:27.943000 audit: CWD cwd="/"
Feb 9 19:02:27.943000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=1 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=2 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=3 name=(null) inode=14221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=4 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=5 name=(null) inode=14222 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=6 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=7 name=(null) inode=14223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=8 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=9 name=(null) inode=14224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=10 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PATH item=11 name=(null) inode=14225 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:27.943000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:02:27.977219 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:02:27.977306 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:02:27.991204 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:02:27.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.002673 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:02:28.013174 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:02:28.013260 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:02:28.013289 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:02:28.380846 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:02:28.380958 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:02:28.386290 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:02:28.414047 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:02:28.562717 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1239)
Feb 9 19:02:28.647865 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 9 19:02:28.678050 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:02:28.753997 systemd-networkd[1224]: lo: Link UP
Feb 9 19:02:28.754014 systemd-networkd[1224]: lo: Gained carrier
Feb 9 19:02:28.754788 systemd-networkd[1224]: Enumeration completed
Feb 9 19:02:28.754986 systemd[1]: Started systemd-networkd.service.
Feb 9 19:02:28.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.759967 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:02:28.768545 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:02:28.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.773954 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:02:28.786677 systemd-networkd[1224]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:02:28.841037 kernel: mlx5_core e444:00:02.0 enP58436s1: Link up
Feb 9 19:02:28.887049 kernel: hv_netvsc 000d3adf-7f9e-000d-3adf-7f9e000d3adf eth0: Data path switched to VF: enP58436s1
Feb 9 19:02:28.887784 systemd-networkd[1224]: enP58436s1: Link UP
Feb 9 19:02:28.887969 systemd-networkd[1224]: eth0: Link UP
Feb 9 19:02:28.887985 systemd-networkd[1224]: eth0: Gained carrier
Feb 9 19:02:28.894407 systemd-networkd[1224]: enP58436s1: Gained carrier
Feb 9 19:02:28.916164 systemd-networkd[1224]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:02:29.107542 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:29.134899 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:02:29.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:29.137992 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:02:29.142558 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:02:29.148344 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:29.171871 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:02:29.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:29.175346 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:02:29.178027 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:02:29.178069 systemd[1]: Reached target local-fs.target.
Feb 9 19:02:29.180591 systemd[1]: Reached target machines.target.
Feb 9 19:02:29.184673 systemd[1]: Starting ldconfig.service...
Feb 9 19:02:29.187229 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:02:29.187361 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:29.189135 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:02:29.193033 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:02:29.197651 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:02:29.202248 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:29.202330 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:29.203950 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:02:29.226375 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1298 (bootctl)
Feb 9 19:02:29.228103 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:02:29.252382 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:02:29.268766 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:02:29.276381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:02:29.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:29.288033 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:02:29.361359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:02:29.362591 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:02:29.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.148372 systemd-fsck[1307]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:02:30.148372 systemd-fsck[1307]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:02:30.149824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:02:30.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.156809 systemd[1]: Mounting boot.mount...
Feb 9 19:02:30.177903 systemd[1]: Mounted boot.mount.
Feb 9 19:02:30.194309 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:02:30.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.381214 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:02:30.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.386692 systemd[1]: Starting audit-rules.service...
Feb 9 19:02:30.391402 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:02:30.398957 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:02:30.404445 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:02:30.409452 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:02:30.418065 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:02:30.421384 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:02:30.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.425570 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:02:30.437000 audit[1325]: SYSTEM_BOOT pid=1325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.443869 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:02:30.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.527902 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:02:30.530798 systemd[1]: Reached target time-set.target.
Feb 9 19:02:30.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.607329 systemd-resolved[1323]: Positive Trust Anchors:
Feb 9 19:02:30.607355 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:02:30.607393 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:02:30.609605 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:02:30.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:30.706000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:02:30.706000 audit[1342]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcbc990810 a2=420 a3=0 items=0 ppid=1318 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:30.706000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:02:30.707767 augenrules[1342]: No rules
Feb 9 19:02:30.708394 systemd[1]: Finished audit-rules.service.
Feb 9 19:02:30.728224 systemd-timesyncd[1324]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
Feb 9 19:02:30.728302 systemd-timesyncd[1324]: Initial clock synchronization to Fri 2024-02-09 19:02:30.728502 UTC.
Feb 9 19:02:30.733304 systemd-resolved[1323]: Using system hostname 'ci-3510.3.2-a-00ed68a33d'.
Feb 9 19:02:30.735238 systemd[1]: Started systemd-resolved.service.
Feb 9 19:02:30.738045 systemd[1]: Reached target network.target.
Feb 9 19:02:30.740285 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:02:30.849381 systemd-networkd[1224]: eth0: Gained IPv6LL
Feb 9 19:02:30.852549 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:02:30.856757 systemd[1]: Reached target network-online.target.
Feb 9 19:02:36.040227 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:02:36.052792 systemd[1]: Finished ldconfig.service.
Feb 9 19:02:36.058540 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:02:36.069008 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:02:36.071708 systemd[1]: Reached target sysinit.target.
Feb 9 19:02:36.074228 systemd[1]: Started motdgen.path.
Feb 9 19:02:36.076186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:02:36.079355 systemd[1]: Started logrotate.timer.
Feb 9 19:02:36.081456 systemd[1]: Started mdadm.timer.
Feb 9 19:02:36.083232 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:02:36.085472 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:02:36.085631 systemd[1]: Reached target paths.target.
Feb 9 19:02:36.091166 systemd[1]: Reached target timers.target.
Feb 9 19:02:36.097381 systemd[1]: Listening on dbus.socket.
Feb 9 19:02:36.101318 systemd[1]: Starting docker.socket...
Feb 9 19:02:36.104789 systemd[1]: Listening on sshd.socket.
Feb 9 19:02:36.107433 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:36.108238 systemd[1]: Listening on docker.socket.
Feb 9 19:02:36.110787 systemd[1]: Reached target sockets.target.
Feb 9 19:02:36.113086 systemd[1]: Reached target basic.target.
Feb 9 19:02:36.115439 systemd[1]: System is tainted: cgroupsv1
Feb 9 19:02:36.115512 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:36.115543 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:36.117104 systemd[1]: Starting containerd.service...
Feb 9 19:02:36.121065 systemd[1]: Starting dbus.service...
Feb 9 19:02:36.125545 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:02:36.130820 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:02:36.133358 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:02:36.135207 systemd[1]: Starting motdgen.service...
Feb 9 19:02:36.139718 systemd[1]: Started nvidia.service.
Feb 9 19:02:36.143881 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:02:36.147944 systemd[1]: Starting prepare-critools.service...
Feb 9 19:02:36.152057 systemd[1]: Starting prepare-helm.service...
Feb 9 19:02:36.157142 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:02:36.162503 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:02:36.170609 systemd[1]: Starting systemd-logind.service...
Feb 9 19:02:36.177532 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:36.177645 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:02:36.179982 systemd[1]: Starting update-engine.service...
Feb 9 19:02:36.187121 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:02:36.202500 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:02:36.202853 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:02:36.228059 jq[1378]: true
Feb 9 19:02:36.225581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:02:36.228508 jq[1357]: false
Feb 9 19:02:36.225952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:02:36.265511 extend-filesystems[1358]: Found sda
Feb 9 19:02:36.265511 extend-filesystems[1358]: Found sda1
Feb 9 19:02:36.265511 extend-filesystems[1358]: Found sda2
Feb 9 19:02:36.265511 extend-filesystems[1358]: Found sda3
Feb 9 19:02:36.265511 extend-filesystems[1358]: Found usr
Feb 9 19:02:36.281878 extend-filesystems[1358]: Found sda4
Feb 9 19:02:36.281878 extend-filesystems[1358]: Found sda6
Feb 9 19:02:36.281878 extend-filesystems[1358]: Found sda7
Feb 9 19:02:36.281878 extend-filesystems[1358]: Found sda9
Feb 9 19:02:36.281878 extend-filesystems[1358]: Checking size of /dev/sda9
Feb 9 19:02:36.300657 jq[1390]: true
Feb 9 19:02:36.316722 tar[1381]: ./
Feb 9 19:02:36.316722 tar[1381]: ./macvlan
Feb 9 19:02:36.325080 tar[1383]: linux-amd64/helm
Feb 9 19:02:36.327476 tar[1382]: crictl
Feb 9 19:02:36.381785 systemd-logind[1374]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:02:36.388503 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:02:36.388895 systemd[1]: Finished motdgen.service.
Feb 9 19:02:36.393557 systemd-logind[1374]: New seat seat0.
Feb 9 19:02:36.404839 extend-filesystems[1358]: Old size kept for /dev/sda9
Feb 9 19:02:36.404839 extend-filesystems[1358]: Found sr0
Feb 9 19:02:36.405216 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:02:36.405511 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:02:36.440666 tar[1381]: ./static
Feb 9 19:02:36.472739 dbus-daemon[1356]: [system] SELinux support is enabled
Feb 9 19:02:36.487269 dbus-daemon[1356]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 19:02:36.473066 systemd[1]: Started dbus.service.
Feb 9 19:02:36.477832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:02:36.477867 systemd[1]: Reached target system-config.target.
Feb 9 19:02:36.480504 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:02:36.480534 systemd[1]: Reached target user-config.target.
Feb 9 19:02:36.485911 systemd[1]: Started systemd-logind.service.
Feb 9 19:02:36.512813 bash[1414]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:02:36.513800 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:02:36.534095 env[1412]: time="2024-02-09T19:02:36.533960912Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:02:36.572675 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:02:36.583715 tar[1381]: ./vlan
Feb 9 19:02:36.611801 env[1412]: time="2024-02-09T19:02:36.611698794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:02:36.616577 env[1412]: time="2024-02-09T19:02:36.616524480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.618617 env[1412]: time="2024-02-09T19:02:36.618569916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:36.618755 env[1412]: time="2024-02-09T19:02:36.618737719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.619267 env[1412]: time="2024-02-09T19:02:36.619235328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:36.621941 env[1412]: time="2024-02-09T19:02:36.621913975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.622108 env[1412]: time="2024-02-09T19:02:36.622089479Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:02:36.622187 env[1412]: time="2024-02-09T19:02:36.622171280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.622375 env[1412]: time="2024-02-09T19:02:36.622354983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.622788 env[1412]: time="2024-02-09T19:02:36.622765891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:36.623204 env[1412]: time="2024-02-09T19:02:36.623177198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:36.623299 env[1412]: time="2024-02-09T19:02:36.623284700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:02:36.623420 env[1412]: time="2024-02-09T19:02:36.623405002Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:02:36.623494 env[1412]: time="2024-02-09T19:02:36.623482403Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:02:36.646729 env[1412]: time="2024-02-09T19:02:36.646579414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:02:36.646729 env[1412]: time="2024-02-09T19:02:36.646650415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:02:36.646729 env[1412]: time="2024-02-09T19:02:36.646673316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647072923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647181425Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647207925Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647231025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647251926Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647272226Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647292527Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647311427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647334127Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647494430Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:02:36.647761 env[1412]: time="2024-02-09T19:02:36.647605532Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:02:36.649042 env[1412]: time="2024-02-09T19:02:36.649001257Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:02:36.649210 env[1412]: time="2024-02-09T19:02:36.649192360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649296 env[1412]: time="2024-02-09T19:02:36.649282862Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:02:36.649435 env[1412]: time="2024-02-09T19:02:36.649416364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649518 env[1412]: time="2024-02-09T19:02:36.649502366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649593 env[1412]: time="2024-02-09T19:02:36.649579967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649655 env[1412]: time="2024-02-09T19:02:36.649643568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649724 env[1412]: time="2024-02-09T19:02:36.649708269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649796 env[1412]: time="2024-02-09T19:02:36.649783271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649855 env[1412]: time="2024-02-09T19:02:36.649844472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.649926 env[1412]: time="2024-02-09T19:02:36.649911173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.650013 env[1412]: time="2024-02-09T19:02:36.650000275Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:02:36.650255 env[1412]: time="2024-02-09T19:02:36.650238579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.650354 env[1412]: time="2024-02-09T19:02:36.650335681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.650459 env[1412]: time="2024-02-09T19:02:36.650444483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.650535 env[1412]: time="2024-02-09T19:02:36.650521984Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:02:36.650614 env[1412]: time="2024-02-09T19:02:36.650598285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:02:36.650670 env[1412]: time="2024-02-09T19:02:36.650658986Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:02:36.650759 env[1412]: time="2024-02-09T19:02:36.650742288Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:02:36.650864 env[1412]: time="2024-02-09T19:02:36.650850490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:02:36.651318 env[1412]: time="2024-02-09T19:02:36.651201996Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.651502901Z" level=info msg="Connect containerd service"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.651569403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.652384717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.652830625Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.652890026Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.652948527Z" level=info msg="containerd successfully booted in 0.163197s"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653495237Z" level=info msg="Start subscribing containerd event"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653736341Z" level=info msg="Start recovering state"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653818043Z" level=info msg="Start event monitor"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653844143Z" level=info msg="Start snapshots syncer"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653862043Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:02:36.688439 env[1412]: time="2024-02-09T19:02:36.653872043Z" level=info msg="Start streaming server"
Feb 9 19:02:36.653606 systemd[1]: Started containerd.service.
Feb 9 19:02:36.714903 tar[1381]: ./portmap
Feb 9 19:02:36.817136 tar[1381]: ./host-local
Feb 9 19:02:36.928556 tar[1381]: ./vrf
Feb 9 19:02:37.005301 tar[1381]: ./bridge
Feb 9 19:02:37.085338 tar[1381]: ./tuning
Feb 9 19:02:37.089304 update_engine[1376]: I0209 19:02:37.088817 1376 main.cc:92] Flatcar Update Engine starting
Feb 9 19:02:37.142809 systemd[1]: Started update-engine.service.
Feb 9 19:02:37.151627 update_engine[1376]: I0209 19:02:37.142887 1376 update_check_scheduler.cc:74] Next update check in 2m0s
Feb 9 19:02:37.148400 systemd[1]: Started locksmithd.service.
Feb 9 19:02:37.170198 tar[1381]: ./firewall
Feb 9 19:02:37.259397 tar[1381]: ./host-device
Feb 9 19:02:37.339265 tar[1381]: ./sbr
Feb 9 19:02:37.411110 tar[1381]: ./loopback
Feb 9 19:02:37.483190 tar[1381]: ./dhcp
Feb 9 19:02:37.654792 tar[1383]: linux-amd64/LICENSE
Feb 9 19:02:37.658471 tar[1383]: linux-amd64/README.md
Feb 9 19:02:37.680410 systemd[1]: Finished prepare-critools.service.
Feb 9 19:02:37.684531 systemd[1]: Finished prepare-helm.service.
Feb 9 19:02:37.703506 tar[1381]: ./ptp
Feb 9 19:02:37.746754 tar[1381]: ./ipvlan
Feb 9 19:02:37.788314 tar[1381]: ./bandwidth
Feb 9 19:02:37.879086 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:02:38.245140 sshd_keygen[1379]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:02:38.268385 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:02:38.273415 systemd[1]: Starting issuegen.service...
Feb 9 19:02:38.280869 systemd[1]: Started waagent.service.
Feb 9 19:02:38.287381 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:02:38.287776 systemd[1]: Finished issuegen.service.
Feb 9 19:02:38.293624 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:02:38.316676 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:02:38.324617 systemd[1]: Started getty@tty1.service.
Feb 9 19:02:38.329659 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:02:38.332840 systemd[1]: Reached target getty.target.
Feb 9 19:02:38.335139 systemd[1]: Reached target multi-user.target.
Feb 9 19:02:38.339681 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:02:38.349744 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:02:38.350297 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:02:38.356428 systemd[1]: Startup finished in 876ms (firmware) + 27.944s (loader) + 2min 14ms (kernel) + 23.260s (userspace) = 2min 52.096s.
Feb 9 19:02:38.674224 login[1504]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 9 19:02:38.676107 login[1505]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:02:38.701810 systemd[1]: Created slice user-500.slice.
Feb 9 19:02:38.703316 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:02:38.709089 systemd-logind[1374]: New session 2 of user core.
Feb 9 19:02:38.717350 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:02:38.720412 systemd[1]: Starting user@500.service...
Feb 9 19:02:38.744677 (systemd)[1511]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:38.867768 systemd[1511]: Queued start job for default target default.target.
Feb 9 19:02:38.868679 systemd[1511]: Reached target paths.target.
Feb 9 19:02:38.868708 systemd[1511]: Reached target sockets.target.
Feb 9 19:02:38.868725 systemd[1511]: Reached target timers.target.
Feb 9 19:02:38.868740 systemd[1511]: Reached target basic.target.
Feb 9 19:02:38.868819 systemd[1511]: Reached target default.target.
Feb 9 19:02:38.868857 systemd[1511]: Startup finished in 115ms.
Feb 9 19:02:38.868944 systemd[1]: Started user@500.service.
Feb 9 19:02:38.870244 systemd[1]: Started session-2.scope.
Feb 9 19:02:39.250421 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:02:39.674786 login[1504]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:02:39.680806 systemd[1]: Started session-1.scope.
Feb 9 19:02:39.681499 systemd-logind[1374]: New session 1 of user core.
Feb 9 19:02:44.184895 waagent[1494]: 2024-02-09T19:02:44.184734Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:02:44.190305 waagent[1494]: 2024-02-09T19:02:44.190195Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:02:44.193184 waagent[1494]: 2024-02-09T19:02:44.193108Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:02:44.196149 waagent[1494]: 2024-02-09T19:02:44.196068Z INFO Daemon Daemon Run daemon
Feb 9 19:02:44.198936 waagent[1494]: 2024-02-09T19:02:44.198868Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:02:44.214056 waagent[1494]: 2024-02-09T19:02:44.213902Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:02:44.222308 waagent[1494]: 2024-02-09T19:02:44.222172Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.222743Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.223757Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.225600Z INFO Daemon Daemon Activate resource disk
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.226470Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.234349Z INFO Daemon Daemon Found device: None
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.235106Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.235997Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.237903Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:02:44.254500 waagent[1494]: 2024-02-09T19:02:44.238840Z INFO Daemon Daemon Running default provisioning handler
Feb 9 19:02:44.257468 waagent[1494]: 2024-02-09T19:02:44.257308Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:02:44.265485 waagent[1494]: 2024-02-09T19:02:44.265341Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:02:44.274589 waagent[1494]: 2024-02-09T19:02:44.265858Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:02:44.274589 waagent[1494]: 2024-02-09T19:02:44.266743Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 19:02:44.370777 waagent[1494]: 2024-02-09T19:02:44.370593Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 19:02:44.482587 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 19:02:44.502753 waagent[1494]: 2024-02-09T19:02:44.502605Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 19:02:44.505956 waagent[1494]: 2024-02-09T19:02:44.505874Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:02:44.509653 waagent[1494]: 2024-02-09T19:02:44.509587Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 19:02:44.513379 waagent[1494]: 2024-02-09T19:02:44.513315Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 19:02:44.516650 waagent[1494]: 2024-02-09T19:02:44.516583Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 19:02:44.519575 waagent[1494]: 2024-02-09T19:02:44.519517Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 19:02:44.634659 waagent[1494]: 2024-02-09T19:02:44.634567Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 19:02:44.638871 waagent[1494]: 2024-02-09T19:02:44.638819Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 19:02:44.641641 waagent[1494]: 2024-02-09T19:02:44.641570Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 19:02:45.379477 waagent[1494]: 2024-02-09T19:02:45.379293Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 19:02:45.389063 waagent[1494]: 2024-02-09T19:02:45.388967Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 19:02:45.394834 waagent[1494]: 2024-02-09T19:02:45.389492Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 19:02:45.506178 waagent[1494]: 2024-02-09T19:02:45.505996Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4
Feb 9 19:02:45.510875 waagent[1494]: 2024-02-09T19:02:45.510792Z INFO Daemon Daemon Certificate with thumbprint F1B23E8A2E3AACD013F18AE5BCAB699CEAF17890 has no matching private key.
Feb 9 19:02:45.516238 waagent[1494]: 2024-02-09T19:02:45.516171Z INFO Daemon Daemon Fetch goal state completed
Feb 9 19:02:45.568484 waagent[1494]: 2024-02-09T19:02:45.568384Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 41fc7cd2-abd9-4b66-916c-2fd3d8598eb3 New eTag: 275119114633385776]
Feb 9 19:02:45.575307 waagent[1494]: 2024-02-09T19:02:45.575201Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:02:45.589617 waagent[1494]: 2024-02-09T19:02:45.589536Z INFO Daemon Daemon Starting provisioning
Feb 9 19:02:45.592695 waagent[1494]: 2024-02-09T19:02:45.592601Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 19:02:45.595447 waagent[1494]: 2024-02-09T19:02:45.595365Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-00ed68a33d]
Feb 9 19:02:45.619373 waagent[1494]: 2024-02-09T19:02:45.619215Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-00ed68a33d]
Feb 9 19:02:45.623795 waagent[1494]: 2024-02-09T19:02:45.623693Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 19:02:45.627688 waagent[1494]: 2024-02-09T19:02:45.627613Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 19:02:45.644254 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 19:02:45.644600 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 19:02:45.644685 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 19:02:45.645032 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:02:45.651067 systemd-networkd[1224]: eth0: DHCPv6 lease lost
Feb 9 19:02:45.653048 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:02:45.653400 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:02:45.657056 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:02:45.695902 systemd-networkd[1556]: enP58436s1: Link UP
Feb 9 19:02:45.695915 systemd-networkd[1556]: enP58436s1: Gained carrier
Feb 9 19:02:45.697364 systemd-networkd[1556]: eth0: Link UP
Feb 9 19:02:45.697373 systemd-networkd[1556]: eth0: Gained carrier
Feb 9 19:02:45.697829 systemd-networkd[1556]: lo: Link UP
Feb 9 19:02:45.697839 systemd-networkd[1556]: lo: Gained carrier
Feb 9 19:02:45.698193 systemd-networkd[1556]: eth0: Gained IPv6LL
Feb 9 19:02:45.698693 systemd-networkd[1556]: Enumeration completed
Feb 9 19:02:45.698886 systemd[1]: Started systemd-networkd.service.
Feb 9 19:02:45.702078 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:02:45.712341 waagent[1494]: 2024-02-09T19:02:45.705138Z INFO Daemon Daemon Create user account if not exists
Feb 9 19:02:45.712341 waagent[1494]: 2024-02-09T19:02:45.709118Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 19:02:45.710148 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:02:45.713181 waagent[1494]: 2024-02-09T19:02:45.713076Z INFO Daemon Daemon Configure sudoer
Feb 9 19:02:45.719764 waagent[1494]: 2024-02-09T19:02:45.713884Z INFO Daemon Daemon Configure sshd
Feb 9 19:02:45.719764 waagent[1494]: 2024-02-09T19:02:45.714861Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 19:02:45.740117 systemd-networkd[1556]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:02:45.743978 systemd[1]: Finished systemd-networkd-wait-online.service.
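[Editor's note] The DHCPv4 lease logged above (10.200.8.37/24, gateway 10.200.8.1, served by 168.63.129.16) can be sanity-checked with Python's standard `ipaddress` module. This is an illustrative sketch, not part of the log; the addresses are taken verbatim from the systemd-networkd line above.

```python
import ipaddress

# Address and prefix exactly as logged by systemd-networkd.
iface = ipaddress.ip_interface("10.200.8.37/24")

print(iface.network)                    # 10.200.8.0/24
print(iface.network.broadcast_address)  # 10.200.8.255 (matches the brd shown in the `ip` dumps later)

# The Azure wireserver that served the lease sits outside the subnet,
# so traffic to it must go via the default gateway 10.200.8.1.
print(ipaddress.ip_address("168.63.129.16") in iface.network)  # False
```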
Feb 9 19:02:46.996539 waagent[1494]: 2024-02-09T19:02:46.996439Z INFO Daemon Daemon Provisioning complete
Feb 9 19:02:47.013509 waagent[1494]: 2024-02-09T19:02:47.013422Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 19:02:47.021391 waagent[1494]: 2024-02-09T19:02:47.013956Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 19:02:47.021391 waagent[1494]: 2024-02-09T19:02:47.015945Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 19:02:47.293480 waagent[1566]: 2024-02-09T19:02:47.293359Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 19:02:47.294340 waagent[1566]: 2024-02-09T19:02:47.294263Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:47.294496 waagent[1566]: 2024-02-09T19:02:47.294439Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:47.306088 waagent[1566]: 2024-02-09T19:02:47.306001Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 19:02:47.306271 waagent[1566]: 2024-02-09T19:02:47.306218Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 19:02:47.369924 waagent[1566]: 2024-02-09T19:02:47.369775Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4
Feb 9 19:02:47.370198 waagent[1566]: 2024-02-09T19:02:47.370122Z INFO ExtHandler ExtHandler Certificate with thumbprint F1B23E8A2E3AACD013F18AE5BCAB699CEAF17890 has no matching private key.
Feb 9 19:02:47.370450 waagent[1566]: 2024-02-09T19:02:47.370397Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 19:02:47.384387 waagent[1566]: 2024-02-09T19:02:47.384320Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 4ed603db-cf52-4ac0-a98f-299e22b37a0d New eTag: 275119114633385776]
Feb 9 19:02:47.385004 waagent[1566]: 2024-02-09T19:02:47.384938Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:02:47.476674 waagent[1566]: 2024-02-09T19:02:47.476476Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:02:47.492882 waagent[1566]: 2024-02-09T19:02:47.488719Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1566
Feb 9 19:02:47.496073 waagent[1566]: 2024-02-09T19:02:47.494433Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:02:47.497263 waagent[1566]: 2024-02-09T19:02:47.496497Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:02:47.579569 waagent[1566]: 2024-02-09T19:02:47.579418Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:02:47.582567 waagent[1566]: 2024-02-09T19:02:47.582480Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:02:47.591667 waagent[1566]: 2024-02-09T19:02:47.591607Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:02:47.592227 waagent[1566]: 2024-02-09T19:02:47.592159Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:02:47.593399 waagent[1566]: 2024-02-09T19:02:47.593331Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 19:02:47.594806 waagent[1566]: 2024-02-09T19:02:47.594745Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:02:47.595492 waagent[1566]: 2024-02-09T19:02:47.595433Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:02:47.595753 waagent[1566]: 2024-02-09T19:02:47.595696Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:47.596471 waagent[1566]: 2024-02-09T19:02:47.596417Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:47.596697 waagent[1566]: 2024-02-09T19:02:47.596637Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:02:47.596872 waagent[1566]: 2024-02-09T19:02:47.596804Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:47.597305 waagent[1566]: 2024-02-09T19:02:47.597250Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:47.597929 waagent[1566]: 2024-02-09T19:02:47.597875Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:02:47.598279 waagent[1566]: 2024-02-09T19:02:47.598225Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:02:47.599406 waagent[1566]: 2024-02-09T19:02:47.599347Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:02:47.599489 waagent[1566]: 2024-02-09T19:02:47.599423Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:02:47.599489 waagent[1566]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:02:47.599489 waagent[1566]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:02:47.599489 waagent[1566]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:02:47.599489 waagent[1566]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:47.599489 waagent[1566]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:47.599489 waagent[1566]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:47.601883 waagent[1566]: 2024-02-09T19:02:47.601781Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:02:47.602154 waagent[1566]: 2024-02-09T19:02:47.602099Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:02:47.603092 waagent[1566]: 2024-02-09T19:02:47.603012Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:02:47.603253 waagent[1566]: 2024-02-09T19:02:47.603199Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:02:47.603708 waagent[1566]: 2024-02-09T19:02:47.603656Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:02:47.613074 waagent[1566]: 2024-02-09T19:02:47.613004Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 19:02:47.614528 waagent[1566]: 2024-02-09T19:02:47.614486Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:02:47.615451 waagent[1566]: 2024-02-09T19:02:47.615404Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 19:02:47.641396 waagent[1566]: 2024-02-09T19:02:47.641268Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1556'
Feb 9 19:02:47.668342 waagent[1566]: 2024-02-09T19:02:47.668257Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 19:02:47.736525 waagent[1566]: 2024-02-09T19:02:47.736389Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:02:47.736525 waagent[1566]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:02:47.736525 waagent[1566]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:02:47.736525 waagent[1566]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:7f:9e brd ff:ff:ff:ff:ff:ff
Feb 9 19:02:47.736525 waagent[1566]: 3: enP58436s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:7f:9e brd ff:ff:ff:ff:ff:ff\ altname enP58436p0s2
Feb 9 19:02:47.736525 waagent[1566]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:02:47.736525 waagent[1566]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:02:47.736525 waagent[1566]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:02:47.736525 waagent[1566]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:02:47.736525 waagent[1566]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:02:47.736525 waagent[1566]: 2: eth0 inet6 fe80::20d:3aff:fedf:7f9e/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:02:47.959938 waagent[1566]: 2024-02-09T19:02:47.959717Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Feb 9 19:02:47.963791 waagent[1566]: 2024-02-09T19:02:47.963652Z INFO EnvHandler ExtHandler Firewall rules:
Feb 9 19:02:47.963791 waagent[1566]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:47.963791 waagent[1566]: pkts bytes target prot opt in out source destination
Feb 9 19:02:47.963791 waagent[1566]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:47.963791 waagent[1566]: pkts bytes target prot opt in out source destination
Feb 9 19:02:47.963791 waagent[1566]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:47.963791 waagent[1566]: pkts bytes target prot opt in out source destination
Feb 9 19:02:47.963791 waagent[1566]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 19:02:47.963791 waagent[1566]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 19:02:47.965293 waagent[1566]: 2024-02-09T19:02:47.965230Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 9 19:02:47.985357 waagent[1566]: 2024-02-09T19:02:47.985280Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 19:02:49.020797 waagent[1494]: 2024-02-09T19:02:49.020557Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 19:02:49.027658 waagent[1494]: 2024-02-09T19:02:49.027568Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 19:02:50.100876 waagent[1610]: 2024-02-09T19:02:50.100732Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 19:02:50.101732 waagent[1610]: 2024-02-09T19:02:50.101656Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 19:02:50.101888 waagent[1610]: 2024-02-09T19:02:50.101833Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 19:02:50.112632 waagent[1610]: 2024-02-09T19:02:50.112503Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:02:50.113980 waagent[1610]: 2024-02-09T19:02:50.113057Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:50.113980 waagent[1610]: 2024-02-09T19:02:50.113428Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:50.126454 waagent[1610]: 2024-02-09T19:02:50.126368Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 19:02:50.138510 waagent[1610]: 2024-02-09T19:02:50.138435Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 19:02:50.139644 waagent[1610]: 2024-02-09T19:02:50.139576Z INFO ExtHandler
Feb 9 19:02:50.139813 waagent[1610]: 2024-02-09T19:02:50.139757Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: eb8fadde-0daa-4b39-a1d6-94b9cac6e350 eTag: 275119114633385776 source: Fabric]
Feb 9 19:02:50.140573 waagent[1610]: 2024-02-09T19:02:50.140514Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 19:02:50.141721 waagent[1610]: 2024-02-09T19:02:50.141658Z INFO ExtHandler
Feb 9 19:02:50.141858 waagent[1610]: 2024-02-09T19:02:50.141808Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 9 19:02:50.148967 waagent[1610]: 2024-02-09T19:02:50.148911Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 9 19:02:50.149487 waagent[1610]: 2024-02-09T19:02:50.149435Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:02:50.169846 waagent[1610]: 2024-02-09T19:02:50.169746Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
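[Editor's note] The Destination/Gateway/Mask columns in the /proc/net/route dumps above are IPv4 addresses encoded as little-endian hex. A short sketch (not part of the log) of decoding them, using values copied from the table above:

```python
import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Entries from the routing table dump above:
print(decode_route_addr("00000000"))  # 0.0.0.0      (default route destination)
print(decode_route_addr("0108C80A"))  # 10.200.8.1   (the default gateway)
print(decode_route_addr("10813FA8"))  # 168.63.129.16 (host route to the Azure wireserver)
print(decode_route_addr("FEA9FEA9"))  # 169.254.169.254 (host route to the metadata endpoint)
```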
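[Editor's note] The EnvHandler error above ("invalid literal for int() with base 10: 'MainPID=1556'") arises because `systemctl show -p MainPID <unit>` prints a `MainPID=1556` key=value line, not a bare integer, and the value was fed to `int()` unstripped. A hedged sketch of tolerant parsing (the helper name is hypothetical, not waagent's actual code); on newer systemd, `systemctl show --value -p MainPID` avoids the problem by printing only the value:

```python
def parse_main_pid(systemctl_output: str) -> int:
    """Extract the PID from a `systemctl show -p MainPID <unit>` line.

    The output has the form 'MainPID=1556'; calling int() on the whole
    line raises exactly the ValueError recorded in the log above.
    """
    _key, _sep, value = systemctl_output.strip().partition("=")
    return int(value)

print(parse_main_pid("MainPID=1556"))  # 1556
```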
Feb 9 19:02:50.241126 waagent[1610]: 2024-02-09T19:02:50.240926Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F1B23E8A2E3AACD013F18AE5BCAB699CEAF17890', 'hasPrivateKey': False}
Feb 9 19:02:50.242310 waagent[1610]: 2024-02-09T19:02:50.242231Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True}
Feb 9 19:02:50.243403 waagent[1610]: 2024-02-09T19:02:50.243338Z INFO ExtHandler Fetch goal state completed
Feb 9 19:02:50.268869 waagent[1610]: 2024-02-09T19:02:50.268757Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1610
Feb 9 19:02:50.272563 waagent[1610]: 2024-02-09T19:02:50.272477Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:02:50.274113 waagent[1610]: 2024-02-09T19:02:50.274048Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:02:50.280297 waagent[1610]: 2024-02-09T19:02:50.280234Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:02:50.280742 waagent[1610]: 2024-02-09T19:02:50.280680Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:02:50.290416 waagent[1610]: 2024-02-09T19:02:50.290353Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:02:50.291005 waagent[1610]: 2024-02-09T19:02:50.290943Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:02:50.320132 waagent[1610]: 2024-02-09T19:02:50.319931Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Feb 9 19:02:50.324224 waagent[1610]: 2024-02-09T19:02:50.324079Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Feb 9 19:02:50.329505 waagent[1610]: 2024-02-09T19:02:50.329430Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 9 19:02:50.331210 waagent[1610]: 2024-02-09T19:02:50.331140Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:02:50.331413 waagent[1610]: 2024-02-09T19:02:50.331342Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:50.333155 waagent[1610]: 2024-02-09T19:02:50.333092Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:02:50.333272 waagent[1610]: 2024-02-09T19:02:50.333220Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:50.333515 waagent[1610]: 2024-02-09T19:02:50.333456Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:02:50.334184 waagent[1610]: 2024-02-09T19:02:50.334123Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:02:50.334761 waagent[1610]: 2024-02-09T19:02:50.334698Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:02:50.334999 waagent[1610]: 2024-02-09T19:02:50.334931Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:02:50.335257 waagent[1610]: 2024-02-09T19:02:50.335204Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:02:50.335257 waagent[1610]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:02:50.335257 waagent[1610]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:02:50.335257 waagent[1610]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:02:50.335257 waagent[1610]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:50.335257 waagent[1610]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:50.335257 waagent[1610]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:02:50.338488 waagent[1610]: 2024-02-09T19:02:50.338167Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:02:50.339428 waagent[1610]: 2024-02-09T19:02:50.339367Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:02:50.339880 waagent[1610]: 2024-02-09T19:02:50.339822Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:02:50.340464 waagent[1610]: 2024-02-09T19:02:50.340400Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:02:50.341329 waagent[1610]: 2024-02-09T19:02:50.341258Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:02:50.341936 waagent[1610]: 2024-02-09T19:02:50.341879Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:02:50.345183 waagent[1610]: 2024-02-09T19:02:50.344806Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:02:50.370872 waagent[1610]: 2024-02-09T19:02:50.370700Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 19:02:50.372898 waagent[1610]: 2024-02-09T19:02:50.372824Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 19:02:50.385597 waagent[1610]: 2024-02-09T19:02:50.385506Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:02:50.385597 waagent[1610]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:02:50.385597 waagent[1610]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:02:50.385597 waagent[1610]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:7f:9e brd ff:ff:ff:ff:ff:ff
Feb 9 19:02:50.385597 waagent[1610]: 3: enP58436s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:7f:9e brd ff:ff:ff:ff:ff:ff\ altname enP58436p0s2
Feb 9 19:02:50.385597 waagent[1610]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:02:50.385597 waagent[1610]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:02:50.385597 waagent[1610]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:02:50.385597 waagent[1610]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:02:50.385597 waagent[1610]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:02:50.385597 waagent[1610]: 2: eth0 inet6 fe80::20d:3aff:fedf:7f9e/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:02:50.452776 waagent[1610]: 2024-02-09T19:02:50.452708Z INFO ExtHandler ExtHandler
Feb 9 19:02:50.457075 waagent[1610]: 2024-02-09T19:02:50.455535Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 13b77be8-c5f2-4d4a-bb96-e049aeb4cbeb correlation 35f65358-9ec5-4c99-99dd-149baf71a628 created: 2024-02-09T18:59:37.671463Z]
Feb 9 19:02:50.463414 waagent[1610]: 2024-02-09T19:02:50.463096Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 9 19:02:50.468872 waagent[1610]: 2024-02-09T19:02:50.468797Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 16 ms]
Feb 9 19:02:50.469229 waagent[1610]: 2024-02-09T19:02:50.469163Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 9 19:02:50.469229 waagent[1610]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:50.469229 waagent[1610]: pkts bytes target prot opt in out source destination
Feb 9 19:02:50.469229 waagent[1610]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:50.469229 waagent[1610]: pkts bytes target prot opt in out source destination
Feb 9 19:02:50.469229 waagent[1610]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:02:50.469229 waagent[1610]: pkts bytes target prot opt in out source destination
Feb 9 19:02:50.469229 waagent[1610]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 19:02:50.469229 waagent[1610]: 103 12340 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 19:02:50.469229 waagent[1610]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 19:02:50.493971 waagent[1610]: 2024-02-09T19:02:50.493882Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 9 19:02:50.508300 waagent[1610]: 2024-02-09T19:02:50.508203Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 81E54857-EFBC-4426-A10D-632E7A13A621;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 9 19:03:15.745006 systemd[1]: Created slice system-sshd.slice.
Feb 9 19:03:15.747334 systemd[1]: Started sshd@0-10.200.8.37:22-10.200.12.6:37826.service.
Feb 9 19:03:16.392161 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 9 19:03:16.578402 sshd[1648]: Accepted publickey for core from 10.200.12.6 port 37826 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:03:16.580422 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:16.586077 systemd[1]: Started session-3.scope.
Feb 9 19:03:16.587043 systemd-logind[1374]: New session 3 of user core.
Feb 9 19:03:17.114281 systemd[1]: Started sshd@1-10.200.8.37:22-10.200.12.6:36840.service.
Feb 9 19:03:17.748868 sshd[1653]: Accepted publickey for core from 10.200.12.6 port 36840 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:03:17.750928 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:17.756930 systemd[1]: Started session-4.scope.
Feb 9 19:03:17.757250 systemd-logind[1374]: New session 4 of user core.
Feb 9 19:03:18.192668 sshd[1653]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:18.196505 systemd[1]: sshd@1-10.200.8.37:22-10.200.12.6:36840.service: Deactivated successfully.
Feb 9 19:03:18.197994 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:03:18.199313 systemd-logind[1374]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:03:18.200808 systemd-logind[1374]: Removed session 4.
Feb 9 19:03:18.295922 systemd[1]: Started sshd@2-10.200.8.37:22-10.200.12.6:36856.service.
Feb 9 19:03:18.927319 sshd[1660]: Accepted publickey for core from 10.200.12.6 port 36856 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:18.929340 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:18.935206 systemd[1]: Started session-5.scope. Feb 9 19:03:18.935468 systemd-logind[1374]: New session 5 of user core. Feb 9 19:03:19.362879 sshd[1660]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:19.367468 systemd[1]: sshd@2-10.200.8.37:22-10.200.12.6:36856.service: Deactivated successfully. Feb 9 19:03:19.369292 systemd-logind[1374]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:19.369296 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:19.370715 systemd-logind[1374]: Removed session 5. Feb 9 19:03:19.467648 systemd[1]: Started sshd@3-10.200.8.37:22-10.200.12.6:36872.service. Feb 9 19:03:20.083439 sshd[1667]: Accepted publickey for core from 10.200.12.6 port 36872 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:20.085500 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:20.091510 systemd[1]: Started session-6.scope. Feb 9 19:03:20.091781 systemd-logind[1374]: New session 6 of user core. Feb 9 19:03:20.521989 sshd[1667]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:20.525842 systemd[1]: sshd@3-10.200.8.37:22-10.200.12.6:36872.service: Deactivated successfully. Feb 9 19:03:20.527285 systemd-logind[1374]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:20.527386 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:20.528993 systemd-logind[1374]: Removed session 6. Feb 9 19:03:20.626462 systemd[1]: Started sshd@4-10.200.8.37:22-10.200.12.6:36878.service. 
Feb 9 19:03:21.254379 sshd[1674]: Accepted publickey for core from 10.200.12.6 port 36878 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:21.256328 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:21.262100 systemd[1]: Started session-7.scope. Feb 9 19:03:21.262352 systemd-logind[1374]: New session 7 of user core. Feb 9 19:03:21.843444 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:03:21.843813 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:21.871323 dbus-daemon[1356]: \xd0\u001d\xa4ǔU: received setenforce notice (enforcing=-348973616) Feb 9 19:03:21.873621 sudo[1678]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:21.992070 sshd[1674]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:21.996642 systemd[1]: sshd@4-10.200.8.37:22-10.200.12.6:36878.service: Deactivated successfully. Feb 9 19:03:21.998948 systemd-logind[1374]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:03:21.999123 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:03:22.000775 systemd-logind[1374]: Removed session 7. Feb 9 19:03:22.081674 update_engine[1376]: I0209 19:03:22.081586 1376 update_attempter.cc:509] Updating boot flags... Feb 9 19:03:22.095351 systemd[1]: Started sshd@5-10.200.8.37:22-10.200.12.6:36880.service. Feb 9 19:03:22.754767 sshd[1686]: Accepted publickey for core from 10.200.12.6 port 36880 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:22.756854 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:22.763420 systemd[1]: Started session-8.scope. Feb 9 19:03:22.763986 systemd-logind[1374]: New session 8 of user core. 
Feb 9 19:03:23.095682 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:03:23.095962 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:23.099659 sudo[1726]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:23.105186 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:03:23.105473 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:23.116753 systemd[1]: Stopping audit-rules.service... Feb 9 19:03:23.117000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:03:23.118956 auditctl[1729]: No rules Feb 9 19:03:23.122105 kernel: kauditd_printk_skb: 55 callbacks suppressed Feb 9 19:03:23.122204 kernel: audit: type=1305 audit(1707505403.117:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:03:23.119447 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:03:23.119716 systemd[1]: Stopped audit-rules.service. Feb 9 19:03:23.121767 systemd[1]: Starting audit-rules.service... 
Feb 9 19:03:23.145874 kernel: audit: type=1300 audit(1707505403.117:137): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe86cbc120 a2=420 a3=0 items=0 ppid=1 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:23.117000 audit[1729]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe86cbc120 a2=420 a3=0 items=0 ppid=1 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:23.117000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:03:23.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.155158 augenrules[1747]: No rules Feb 9 19:03:23.156318 systemd[1]: Finished audit-rules.service. Feb 9 19:03:23.159079 sudo[1725]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:23.163343 kernel: audit: type=1327 audit(1707505403.117:137): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:03:23.163420 kernel: audit: type=1131 audit(1707505403.117:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:03:23.174965 kernel: audit: type=1130 audit(1707505403.155:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.158000 audit[1725]: USER_END pid=1725 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.158000 audit[1725]: CRED_DISP pid=1725 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.200920 kernel: audit: type=1106 audit(1707505403.158:140): pid=1725 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.201044 kernel: audit: type=1104 audit(1707505403.158:141): pid=1725 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.258356 sshd[1686]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:23.258000 audit[1686]: USER_END pid=1686 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:23.262281 systemd[1]: sshd@5-10.200.8.37:22-10.200.12.6:36880.service: Deactivated successfully. Feb 9 19:03:23.263422 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 9 19:03:23.265152 systemd-logind[1374]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:03:23.266215 systemd-logind[1374]: Removed session 8. Feb 9 19:03:23.259000 audit[1686]: CRED_DISP pid=1686 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:23.278044 kernel: audit: type=1106 audit(1707505403.258:142): pid=1686 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:23.278094 kernel: audit: type=1104 audit(1707505403.259:143): pid=1686 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:23.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.37:22-10.200.12.6:36880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.292408 kernel: audit: type=1131 audit(1707505403.259:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.37:22-10.200.12.6:36880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:23.364208 systemd[1]: Started sshd@6-10.200.8.37:22-10.200.12.6:36890.service. Feb 9 19:03:23.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.37:22-10.200.12.6:36890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:03:24.005000 audit[1754]: USER_ACCT pid=1754 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:24.006919 sshd[1754]: Accepted publickey for core from 10.200.12.6 port 36890 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:24.007000 audit[1754]: CRED_ACQ pid=1754 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:24.007000 audit[1754]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b277dd0 a2=3 a3=0 items=0 ppid=1 pid=1754 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:24.007000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:03:24.008896 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:24.015108 systemd-logind[1374]: New session 9 of user core. Feb 9 19:03:24.015441 systemd[1]: Started session-9.scope. 
Feb 9 19:03:24.023000 audit[1754]: USER_START pid=1754 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:24.024000 audit[1757]: CRED_ACQ pid=1757 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:03:24.349000 audit[1758]: USER_ACCT pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:24.350000 audit[1758]: CRED_REFR pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:24.351074 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:03:24.351422 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:24.352000 audit[1758]: USER_START pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:03:25.058547 systemd[1]: Starting docker.service... 
Feb 9 19:03:25.115096 env[1773]: time="2024-02-09T19:03:25.115010908Z" level=info msg="Starting up" Feb 9 19:03:25.120201 env[1773]: time="2024-02-09T19:03:25.120168912Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:25.120391 env[1773]: time="2024-02-09T19:03:25.120375612Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:25.120458 env[1773]: time="2024-02-09T19:03:25.120444912Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:25.120500 env[1773]: time="2024-02-09T19:03:25.120492312Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:25.122493 env[1773]: time="2024-02-09T19:03:25.122442014Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:25.122493 env[1773]: time="2024-02-09T19:03:25.122481614Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:25.122644 env[1773]: time="2024-02-09T19:03:25.122503414Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:25.122644 env[1773]: time="2024-02-09T19:03:25.122518214Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:25.130613 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2239159841-merged.mount: Deactivated successfully. Feb 9 19:03:25.241340 env[1773]: time="2024-02-09T19:03:25.241287303Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:03:25.241340 env[1773]: time="2024-02-09T19:03:25.241321703Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:03:25.241663 env[1773]: time="2024-02-09T19:03:25.241641104Z" level=info msg="Loading containers: start." 
Feb 9 19:03:25.278000 audit[1801]: NETFILTER_CFG table=nat:6 family=2 entries=2 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.278000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc0d46ee40 a2=0 a3=7ffc0d46ee2c items=0 ppid=1773 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.278000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 19:03:25.280000 audit[1803]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.280000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc396ee2c0 a2=0 a3=7ffc396ee2ac items=0 ppid=1773 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 19:03:25.282000 audit[1805]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.282000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd991c5360 a2=0 a3=7ffd991c534c items=0 ppid=1773 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Feb 9 19:03:25.284000 audit[1807]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.284000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd98936df0 a2=0 a3=7ffd98936ddc items=0 ppid=1773 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.284000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:03:25.286000 audit[1809]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.286000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8d22f9b0 a2=0 a3=7ffd8d22f99c items=0 ppid=1773 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.286000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 19:03:25.288000 audit[1811]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_rule pid=1811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.288000 audit[1811]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc32041b20 a2=0 a3=7ffc32041b0c items=0 ppid=1773 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.288000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 19:03:25.306000 audit[1813]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.306000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9d094540 a2=0 a3=7ffe9d09452c items=0 ppid=1773 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.306000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 19:03:25.308000 audit[1815]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.308000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffd3dae950 a2=0 a3=7fffd3dae93c items=0 ppid=1773 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.308000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 19:03:25.310000 audit[1817]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.310000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcc049be80 a2=0 a3=7ffcc049be6c items=0 ppid=1773 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:03:25.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:03:25.330000 audit[1821]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_unregister_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.330000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff93dd1540 a2=0 a3=7fff93dd152c items=0 ppid=1773 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.330000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:03:25.331000 audit[1822]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.331000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdebb2e6e0 a2=0 a3=7ffdebb2e6cc items=0 ppid=1773 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.331000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:03:25.362040 kernel: Initializing XFRM netlink socket Feb 9 19:03:25.406245 env[1773]: time="2024-02-09T19:03:25.406199627Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:03:25.479000 audit[1830]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.479000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffefcb87d50 a2=0 a3=7ffefcb87d3c items=0 ppid=1773 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.479000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 19:03:25.490000 audit[1833]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.490000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffebc418310 a2=0 a3=7ffebc4182fc items=0 ppid=1773 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.490000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 19:03:25.493000 audit[1836]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.493000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd265f9fd0 a2=0 a3=7ffd265f9fbc items=0 ppid=1773 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.493000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 19:03:25.495000 audit[1838]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.495000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffda0f04ea0 a2=0 a3=7ffda0f04e8c items=0 ppid=1773 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.495000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 19:03:25.497000 audit[1840]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.497000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffebb058f30 a2=0 a3=7ffebb058f1c items=0 ppid=1773 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.497000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 19:03:25.499000 audit[1842]: NETFILTER_CFG table=nat:22 family=2 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.499000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe0370b7a0 a2=0 a3=7ffe0370b78c items=0 ppid=1773 pid=1842 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.499000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 19:03:25.501000 audit[1844]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.501000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff8c7d71f0 a2=0 a3=7fff8c7d71dc items=0 ppid=1773 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.501000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 19:03:25.503000 audit[1846]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.503000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcec1e22a0 a2=0 a3=7ffcec1e228c items=0 ppid=1773 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.503000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 19:03:25.505000 audit[1848]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule 
pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.505000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff284cc2a0 a2=0 a3=7fff284cc28c items=0 ppid=1773 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.505000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:03:25.507000 audit[1850]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.507000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc373301b0 a2=0 a3=7ffc3733019c items=0 ppid=1773 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.507000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:03:25.509000 audit[1852]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.509000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd219f37d0 a2=0 a3=7ffd219f37bc items=0 ppid=1773 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.509000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 19:03:25.511148 systemd-networkd[1556]: docker0: Link UP Feb 9 19:03:25.528000 audit[1856]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_unregister_rule pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.528000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffc047c460 a2=0 a3=7fffc047c44c items=0 ppid=1773 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.528000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:03:25.529000 audit[1857]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:25.529000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb95cc190 a2=0 a3=7ffcb95cc17c items=0 ppid=1773 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:25.529000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:03:25.530689 env[1773]: time="2024-02-09T19:03:25.530646221Z" level=info msg="Loading containers: done." Feb 9 19:03:25.543326 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck595455688-merged.mount: Deactivated successfully. 
Feb 9 19:03:25.607609 env[1773]: time="2024-02-09T19:03:25.606482078Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 19:03:25.607609 env[1773]: time="2024-02-09T19:03:25.606765978Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 19:03:25.607609 env[1773]: time="2024-02-09T19:03:25.607173279Z" level=info msg="Daemon has completed initialization"
Feb 9 19:03:25.637341 systemd[1]: Started docker.service.
Feb 9 19:03:25.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:25.648372 env[1773]: time="2024-02-09T19:03:25.648169009Z" level=info msg="API listen on /run/docker.sock"
Feb 9 19:03:25.668225 systemd[1]: Reloading.
Feb 9 19:03:25.752798 /usr/lib/systemd/system-generators/torcx-generator[1903]: time="2024-02-09T19:03:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:03:25.760102 /usr/lib/systemd/system-generators/torcx-generator[1903]: time="2024-02-09T19:03:25Z" level=info msg="torcx already run"
Feb 9 19:03:25.846003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:03:25.846049 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:03:25.864265 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:03:25.944848 systemd[1]: Started kubelet.service.
Feb 9 19:03:25.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:26.028186 kubelet[1971]: E0209 19:03:26.028114 1971 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:03:26.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:03:26.029961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:03:26.030237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:03:30.021939 env[1412]: time="2024-02-09T19:03:30.021865654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 9 19:03:30.661755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755790191.mount: Deactivated successfully.
Feb 9 19:03:32.986484 env[1412]: time="2024-02-09T19:03:32.986410770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:32.996340 env[1412]: time="2024-02-09T19:03:32.996255574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:33.001577 env[1412]: time="2024-02-09T19:03:33.001513477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:33.007901 env[1412]: time="2024-02-09T19:03:33.007833880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:33.008633 env[1412]: time="2024-02-09T19:03:33.008583880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 9 19:03:33.021032 env[1412]: time="2024-02-09T19:03:33.020968186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 19:03:35.068141 env[1412]: time="2024-02-09T19:03:35.068064183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:35.080143 env[1412]: time="2024-02-09T19:03:35.080072543Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:35.085167 env[1412]: time="2024-02-09T19:03:35.085110794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:35.091631 env[1412]: time="2024-02-09T19:03:35.091573088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:35.092056 env[1412]: time="2024-02-09T19:03:35.091993400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 19:03:35.106424 env[1412]: time="2024-02-09T19:03:35.106371631Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 19:03:36.251545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:03:36.267955 kernel: kauditd_printk_skb: 86 callbacks suppressed
Feb 9 19:03:36.268146 kernel: audit: type=1130 audit(1707505416.250:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.251859 systemd[1]: Stopped kubelet.service.
Feb 9 19:03:36.254029 systemd[1]: Started kubelet.service.
Feb 9 19:03:36.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.301396 kernel: audit: type=1131 audit(1707505416.250:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.301590 kernel: audit: type=1130 audit(1707505416.250:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:36.374464 kubelet[2003]: E0209 19:03:36.374382 2003 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:03:36.379615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:03:36.379843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:03:36.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:03:36.396056 kernel: audit: type=1131 audit(1707505416.379:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:03:36.497516 env[1412]: time="2024-02-09T19:03:36.497439524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:36.506257 env[1412]: time="2024-02-09T19:03:36.505348354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:36.517367 env[1412]: time="2024-02-09T19:03:36.517298903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:36.526547 env[1412]: time="2024-02-09T19:03:36.526477670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:36.527311 env[1412]: time="2024-02-09T19:03:36.527267693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 19:03:36.541155 env[1412]: time="2024-02-09T19:03:36.541100296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 19:03:37.698760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908475214.mount: Deactivated successfully.
Feb 9 19:03:38.260626 env[1412]: time="2024-02-09T19:03:38.260554897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.270077 env[1412]: time="2024-02-09T19:03:38.270001657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.274218 env[1412]: time="2024-02-09T19:03:38.274174172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.278508 env[1412]: time="2024-02-09T19:03:38.278468890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.278885 env[1412]: time="2024-02-09T19:03:38.278845401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 19:03:38.291887 env[1412]: time="2024-02-09T19:03:38.291844259Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 19:03:38.778608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898172679.mount: Deactivated successfully.
Feb 9 19:03:38.806324 env[1412]: time="2024-02-09T19:03:38.806259236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.815281 env[1412]: time="2024-02-09T19:03:38.815226783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.820382 env[1412]: time="2024-02-09T19:03:38.820325624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.827155 env[1412]: time="2024-02-09T19:03:38.827099111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:38.827597 env[1412]: time="2024-02-09T19:03:38.827559523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 19:03:38.839591 env[1412]: time="2024-02-09T19:03:38.839546054Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 19:03:39.645091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999787484.mount: Deactivated successfully.
Feb 9 19:03:44.526290 env[1412]: time="2024-02-09T19:03:44.526212476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:44.535612 env[1412]: time="2024-02-09T19:03:44.535543594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:44.543383 env[1412]: time="2024-02-09T19:03:44.543315976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:44.550169 env[1412]: time="2024-02-09T19:03:44.550106334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:44.550764 env[1412]: time="2024-02-09T19:03:44.550727949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 19:03:44.564525 env[1412]: time="2024-02-09T19:03:44.564471070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 19:03:45.156105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472735470.mount: Deactivated successfully.
Feb 9 19:03:45.789398 env[1412]: time="2024-02-09T19:03:45.789321086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:45.800675 env[1412]: time="2024-02-09T19:03:45.800611843Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:45.807897 env[1412]: time="2024-02-09T19:03:45.807838407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:45.821405 env[1412]: time="2024-02-09T19:03:45.821343114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:45.822080 env[1412]: time="2024-02-09T19:03:45.822013029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 19:03:46.501537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 9 19:03:46.501890 systemd[1]: Stopped kubelet.service.
Feb 9 19:03:46.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.504284 systemd[1]: Started kubelet.service.
Feb 9 19:03:46.519040 kernel: audit: type=1130 audit(1707505426.500:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.550844 kernel: audit: type=1131 audit(1707505426.500:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.550997 kernel: audit: type=1130 audit(1707505426.503:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:46.597306 kubelet[2037]: E0209 19:03:46.597252 2037 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:03:46.599203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:03:46.599430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:03:46.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:03:46.614053 kernel: audit: type=1131 audit(1707505426.598:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 9 19:03:48.552933 systemd[1]: Stopped kubelet.service.
Feb 9 19:03:48.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.583609 kernel: audit: type=1130 audit(1707505428.552:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.583738 kernel: audit: type=1131 audit(1707505428.552:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.573166 systemd[1]: Reloading.
Feb 9 19:03:48.664407 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T19:03:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:03:48.664449 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T19:03:48Z" level=info msg="torcx already run"
Feb 9 19:03:48.760882 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:03:48.760905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:03:48.779311 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:03:48.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.867715 systemd[1]: Started kubelet.service.
Feb 9 19:03:48.883042 kernel: audit: type=1130 audit(1707505428.866:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:48.927059 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:03:48.927429 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:03:48.927587 kubelet[2187]: I0209 19:03:48.927555 2187 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:03:48.929123 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:03:48.929283 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:03:49.192734 kubelet[2187]: I0209 19:03:49.192156 2187 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:03:49.192734 kubelet[2187]: I0209 19:03:49.192188 2187 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:03:49.192734 kubelet[2187]: I0209 19:03:49.192525 2187 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:03:49.196042 kubelet[2187]: E0209 19:03:49.196002 2187 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.196258 kubelet[2187]: I0209 19:03:49.196241 2187 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:03:49.199457 kubelet[2187]: I0209 19:03:49.199435 2187 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:03:49.199867 kubelet[2187]: I0209 19:03:49.199844 2187 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:03:49.199967 kubelet[2187]: I0209 19:03:49.199930 2187 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:03:49.200120 kubelet[2187]: I0209 19:03:49.199968 2187 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:03:49.200120 kubelet[2187]: I0209 19:03:49.199985 2187 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:03:49.200219 kubelet[2187]: I0209 19:03:49.200127 2187 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:03:49.203655 kubelet[2187]: I0209 19:03:49.203636 2187 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:03:49.203758 kubelet[2187]: I0209 19:03:49.203662 2187 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:03:49.203758 kubelet[2187]: I0209 19:03:49.203694 2187 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:03:49.203758 kubelet[2187]: I0209 19:03:49.203723 2187 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:03:49.204515 kubelet[2187]: W0209 19:03:49.204468 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-00ed68a33d&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.204602 kubelet[2187]: E0209 19:03:49.204530 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-00ed68a33d&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.204655 kubelet[2187]: I0209 19:03:49.204630 2187 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:03:49.204949 kubelet[2187]: W0209 19:03:49.204931 2187 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:03:49.205443 kubelet[2187]: I0209 19:03:49.205417 2187 server.go:1186] "Started kubelet"
Feb 9 19:03:49.206000 audit[2187]: AVC avc: denied { mac_admin } for pid=2187 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:03:49.210826 kubelet[2187]: W0209 19:03:49.210796 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.210927 kubelet[2187]: E0209 19:03:49.210918 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.211110 kubelet[2187]: E0209 19:03:49.211032 2187 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247241fd6964b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 205390923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 205390923, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.37:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:03:49.212176 kubelet[2187]: I0209 19:03:49.212159 2187 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:03:49.213678 kubelet[2187]: I0209 19:03:49.213663 2187 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:03:49.222488 kernel: audit: type=1400 audit(1707505429.206:192): avc: denied { mac_admin } for pid=2187 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:03:49.222564 kubelet[2187]: E0209 19:03:49.222341 2187 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:03:49.222564 kubelet[2187]: E0209 19:03:49.222367 2187 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:03:49.222925 kubelet[2187]: I0209 19:03:49.222705 2187 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Feb 9 19:03:49.222925 kubelet[2187]: I0209 19:03:49.222765 2187 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Feb 9 19:03:49.222925 kubelet[2187]: I0209 19:03:49.222857 2187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:03:49.206000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:03:49.230031 kernel: audit: type=1401 audit(1707505429.206:192): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:03:49.231423 kubelet[2187]: I0209 19:03:49.231398 2187 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:03:49.231679 kubelet[2187]: I0209 19:03:49.231657 2187 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:03:49.232452 kubelet[2187]: W0209 19:03:49.231972 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.232452 kubelet[2187]: E0209 19:03:49.232039 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.232452 kubelet[2187]: E0209 19:03:49.232106 2187 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-00ed68a33d?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused
Feb 9 19:03:49.206000 audit[2187]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c2b290 a1=c000c22be8 a2=c000c2b260 a3=25 items=0 ppid=1 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:49.206000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:03:49.221000 audit[2187]: AVC avc: denied { mac_admin } for pid=2187 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:03:49.257046 kernel: audit: type=1300 audit(1707505429.206:192): arch=c000003e syscall=188 success=no exit=-22 a0=c000c2b290 a1=c000c22be8 a2=c000c2b260 a3=25 items=0 ppid=1 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:49.221000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:03:49.221000 audit[2187]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000265800 a1=c000c22c90 a2=c000c2b440 a3=25 items=0 ppid=1 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:49.221000
audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:49.248000 audit[2197]: NETFILTER_CFG table=mangle:30 family=2 entries=2 op=nft_register_chain pid=2197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.248000 audit[2197]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd1ff3be0 a2=0 a3=7ffdd1ff3bcc items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:03:49.255000 audit[2198]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.255000 audit[2198]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc992dced0 a2=0 a3=7ffc992dcebc items=0 ppid=2187 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:03:49.259000 audit[2200]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.259000 audit[2200]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff20a0e830 a2=0 a3=7fff20a0e81c items=0 ppid=2187 pid=2200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:03:49.262000 audit[2202]: NETFILTER_CFG table=filter:33 family=2 entries=2 op=nft_register_chain pid=2202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.262000 audit[2202]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff4a008e90 a2=0 a3=7fff4a008e7c items=0 ppid=2187 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:03:49.311805 kubelet[2187]: I0209 19:03:49.311769 2187 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:49.311805 kubelet[2187]: I0209 19:03:49.311797 2187 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:49.312060 kubelet[2187]: I0209 19:03:49.311820 2187 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:49.316491 kubelet[2187]: I0209 19:03:49.316464 2187 policy_none.go:49] "None policy: Start" Feb 9 19:03:49.317051 kubelet[2187]: I0209 19:03:49.317033 2187 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:49.317126 kubelet[2187]: I0209 19:03:49.317057 2187 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:49.331464 kubelet[2187]: I0209 19:03:49.331440 2187 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:49.330000 audit[2187]: AVC avc: denied { mac_admin } for pid=2187 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:03:49.330000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:49.330000 audit[2187]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011578c0 a1=c001107f50 a2=c001157890 a3=25 items=0 ppid=1 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.330000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:49.331805 kubelet[2187]: I0209 19:03:49.331537 2187 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:03:49.331805 kubelet[2187]: I0209 19:03:49.331723 2187 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:49.332000 audit[2209]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.332000 audit[2209]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffddee09740 a2=0 a3=7ffddee0972c items=0 ppid=2187 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:03:49.334000 audit[2210]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.334000 audit[2210]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb7654ab0 a2=0 a3=7ffeb7654a9c items=0 ppid=2187 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:03:49.335580 kubelet[2187]: E0209 19:03:49.335561 2187 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-00ed68a33d\" not found" Feb 9 19:03:49.341859 kubelet[2187]: 
I0209 19:03:49.341845 2187 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.342294 kubelet[2187]: E0209 19:03:49.342281 2187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.354000 audit[2213]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.354000 audit[2213]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff9b3d5920 a2=0 a3=7fff9b3d590c items=0 ppid=2187 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:03:49.411000 audit[2216]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.411000 audit[2216]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff67aaf460 a2=0 a3=7fff67aaf44c items=0 ppid=2187 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:03:49.412000 audit[2217]: NETFILTER_CFG table=nat:38 family=2 
entries=1 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.412000 audit[2217]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbf214bc0 a2=0 a3=7fffbf214bac items=0 ppid=2187 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:03:49.414000 audit[2218]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.414000 audit[2218]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6cedebb0 a2=0 a3=7ffc6cedeb9c items=0 ppid=2187 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:03:49.416000 audit[2220]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.416000 audit[2220]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdbb78c790 a2=0 a3=7ffdbb78c77c items=0 ppid=2187 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:03:49.418000 
audit[2222]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_rule pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.418000 audit[2222]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc320224f0 a2=0 a3=7ffc320224dc items=0 ppid=2187 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:03:49.420000 audit[2224]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.420000 audit[2224]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd222a79f0 a2=0 a3=7ffd222a79dc items=0 ppid=2187 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:03:49.422000 audit[2226]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.422000 audit[2226]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff9c75efc0 a2=0 a3=7fff9c75efac items=0 ppid=2187 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:03:49.425000 audit[2228]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_rule pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.425000 audit[2228]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff5871c3a0 a2=0 a3=7fff5871c38c items=0 ppid=2187 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:03:49.426574 kubelet[2187]: I0209 19:03:49.426550 2187 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:03:49.426000 audit[2229]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=2229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.426000 audit[2229]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd824f910 a2=0 a3=7fffd824f8fc items=0 ppid=2187 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:03:49.426000 audit[2230]: NETFILTER_CFG table=mangle:46 family=2 entries=1 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.426000 audit[2230]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff36d663d0 a2=0 a3=7fff36d663bc items=0 ppid=2187 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:03:49.427000 audit[2231]: NETFILTER_CFG table=nat:47 family=10 entries=2 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.427000 audit[2231]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffed00e9d80 a2=0 a3=7ffed00e9d6c items=0 ppid=2187 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.427000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:03:49.428000 audit[2232]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.428000 audit[2232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7ddb0550 a2=0 a3=7fff7ddb053c items=0 ppid=2187 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:03:49.430000 audit[2234]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:03:49.430000 audit[2234]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc58e85870 a2=0 a3=7ffc58e8585c items=0 ppid=2187 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.430000 audit[2235]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_rule pid=2235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.430000 audit[2235]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffda57ccde0 a2=0 a3=7ffda57ccdcc items=0 ppid=2187 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.430000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:03:49.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:03:49.431000 audit[2236]: NETFILTER_CFG table=filter:51 family=10 entries=2 op=nft_register_chain pid=2236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.431000 audit[2236]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff4222d100 a2=0 a3=7fff4222d0ec items=0 ppid=2187 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:03:49.432767 kubelet[2187]: E0209 19:03:49.432738 2187 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-00ed68a33d?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:49.433000 audit[2238]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.433000 audit[2238]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffffc5e7990 a2=0 a3=7ffffc5e797c items=0 ppid=2187 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.433000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:03:49.434000 audit[2239]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=2239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.434000 audit[2239]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9c18bc50 a2=0 a3=7ffc9c18bc3c items=0 ppid=2187 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:03:49.435000 audit[2240]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=2240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.435000 audit[2240]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe18048900 a2=0 a3=7ffe180488ec items=0 ppid=2187 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:03:49.437000 audit[2242]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.437000 audit[2242]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc035fcf60 a2=0 a3=7ffc035fcf4c items=0 ppid=2187 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:03:49.439000 audit[2244]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.439000 audit[2244]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffffe4674b0 a2=0 a3=7ffffe46749c items=0 ppid=2187 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:03:49.441000 audit[2246]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_rule pid=2246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.441000 audit[2246]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff1f31b180 a2=0 a3=7fff1f31b16c items=0 ppid=2187 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:03:49.447000 audit[2248]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_rule 
pid=2248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.447000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffca436cc80 a2=0 a3=7ffca436cc6c items=0 ppid=2187 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:03:49.465000 audit[2250]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_rule pid=2250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.465000 audit[2250]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffc062d7990 a2=0 a3=7ffc062d797c items=0 ppid=2187 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:03:49.467005 kubelet[2187]: I0209 19:03:49.466972 2187 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:03:49.467100 kubelet[2187]: I0209 19:03:49.467009 2187 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:03:49.467100 kubelet[2187]: I0209 19:03:49.467049 2187 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:03:49.467203 kubelet[2187]: E0209 19:03:49.467108 2187 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:03:49.467891 kubelet[2187]: W0209 19:03:49.467861 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:49.467983 kubelet[2187]: E0209 19:03:49.467904 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:49.468000 audit[2251]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.468000 audit[2251]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe292d01e0 a2=0 a3=7ffe292d01cc items=0 ppid=2187 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:03:49.469000 audit[2252]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.469000 audit[2252]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5feff960 a2=0 a3=7fff5feff94c items=0 ppid=2187 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:03:49.470000 audit[2253]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:03:49.470000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4fd3a880 a2=0 a3=7ffd4fd3a86c items=0 ppid=2187 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:49.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:03:49.544509 kubelet[2187]: I0209 19:03:49.544481 2187 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.545000 kubelet[2187]: E0209 19:03:49.544973 2187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.567232 kubelet[2187]: I0209 19:03:49.567198 2187 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:49.568826 kubelet[2187]: I0209 19:03:49.568802 2187 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:49.572869 kubelet[2187]: I0209 19:03:49.572843 2187 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:49.573910 kubelet[2187]: I0209 19:03:49.573887 2187 
status_manager.go:698] "Failed to get status for pod" podUID=0d61aea9025b9573fa7232d9b4a47357 pod="kube-system/kube-scheduler-ci-3510.3.2-a-00ed68a33d" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-00ed68a33d\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 9 19:03:49.581631 kubelet[2187]: I0209 19:03:49.581612 2187 status_manager.go:698] "Failed to get status for pod" podUID=f23bb01936aabb8aca4745fdb21d530c pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-00ed68a33d\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 9 19:03:49.582193 kubelet[2187]: I0209 19:03:49.582171 2187 status_manager.go:698] "Failed to get status for pod" podUID=ab129e358fe0fb438fd54bfc1858a14a pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-00ed68a33d\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 9 19:03:49.634682 kubelet[2187]: I0209 19:03:49.634605 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.634952 kubelet[2187]: I0209 19:03:49.634725 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.634952 kubelet[2187]: I0209 19:03:49.634818 2187 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.634952 kubelet[2187]: I0209 19:03:49.634916 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d61aea9025b9573fa7232d9b4a47357-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-00ed68a33d\" (UID: \"0d61aea9025b9573fa7232d9b4a47357\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.635189 kubelet[2187]: I0209 19:03:49.635034 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.635189 kubelet[2187]: I0209 19:03:49.635078 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.635189 kubelet[2187]: I0209 19:03:49.635117 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: 
\"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.635189 kubelet[2187]: I0209 19:03:49.635154 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.635369 kubelet[2187]: I0209 19:03:49.635201 2187 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.834084 kubelet[2187]: E0209 19:03:49.833988 2187 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-00ed68a33d?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:49.874572 env[1412]: time="2024-02-09T19:03:49.874507885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-00ed68a33d,Uid:0d61aea9025b9573fa7232d9b4a47357,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:49.881355 env[1412]: time="2024-02-09T19:03:49.881175721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-00ed68a33d,Uid:f23bb01936aabb8aca4745fdb21d530c,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:49.882509 env[1412]: time="2024-02-09T19:03:49.882469648Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-00ed68a33d,Uid:ab129e358fe0fb438fd54bfc1858a14a,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:49.947214 kubelet[2187]: I0209 19:03:49.947176 2187 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:49.947840 kubelet[2187]: E0209 19:03:49.947750 2187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:50.398490 kubelet[2187]: W0209 19:03:50.398417 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-00ed68a33d&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.398490 kubelet[2187]: E0209 19:03:50.398489 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-00ed68a33d&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.546663 kubelet[2187]: W0209 19:03:50.546606 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.546663 kubelet[2187]: E0209 19:03:50.546665 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.609209 kubelet[2187]: W0209 19:03:50.609134 2187 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.609209 kubelet[2187]: E0209 19:03:50.609208 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.634797 kubelet[2187]: E0209 19:03:50.634749 2187 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-00ed68a33d?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.750962 kubelet[2187]: I0209 19:03:50.750458 2187 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:50.751262 kubelet[2187]: E0209 19:03:50.751241 2187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:50.929152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934591568.mount: Deactivated successfully. 
Feb 9 19:03:50.967100 env[1412]: time="2024-02-09T19:03:50.967040477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:50.971835 env[1412]: time="2024-02-09T19:03:50.971788871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:50.981610 kubelet[2187]: W0209 19:03:50.981574 2187 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.981938 kubelet[2187]: E0209 19:03:50.981619 2187 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:50.994486 env[1412]: time="2024-02-09T19:03:50.994432221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:50.999233 env[1412]: time="2024-02-09T19:03:50.999193316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.004564 env[1412]: time="2024-02-09T19:03:51.004452219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.012583 env[1412]: time="2024-02-09T19:03:51.012543276Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.017504 env[1412]: time="2024-02-09T19:03:51.017469171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.024367 env[1412]: time="2024-02-09T19:03:51.024332404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.028995 env[1412]: time="2024-02-09T19:03:51.028967594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.044221 env[1412]: time="2024-02-09T19:03:51.044191589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.048708 env[1412]: time="2024-02-09T19:03:51.048675675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.051982 env[1412]: time="2024-02-09T19:03:51.051949639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:51.145501 env[1412]: time="2024-02-09T19:03:51.140379651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:51.145501 env[1412]: time="2024-02-09T19:03:51.140428052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:51.145501 env[1412]: time="2024-02-09T19:03:51.140441553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:51.145501 env[1412]: time="2024-02-09T19:03:51.145193045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854 pid=2262 runtime=io.containerd.runc.v2 Feb 9 19:03:51.161215 env[1412]: time="2024-02-09T19:03:51.161146854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:51.161403 env[1412]: time="2024-02-09T19:03:51.161222455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:51.161403 env[1412]: time="2024-02-09T19:03:51.161249256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:51.161857 env[1412]: time="2024-02-09T19:03:51.161543361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0561bfabf0b98200112e13a4dd03a44f19797ad79f7867c64911e56533be242 pid=2282 runtime=io.containerd.runc.v2 Feb 9 19:03:51.172949 env[1412]: time="2024-02-09T19:03:51.172863881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:51.173111 env[1412]: time="2024-02-09T19:03:51.172929982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:51.173111 env[1412]: time="2024-02-09T19:03:51.172945282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:51.173247 env[1412]: time="2024-02-09T19:03:51.173128586Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65 pid=2306 runtime=io.containerd.runc.v2 Feb 9 19:03:51.299319 env[1412]: time="2024-02-09T19:03:51.299167227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-00ed68a33d,Uid:ab129e358fe0fb438fd54bfc1858a14a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854\"" Feb 9 19:03:51.307442 env[1412]: time="2024-02-09T19:03:51.307400686Z" level=info msg="CreateContainer within sandbox \"2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:03:51.311185 env[1412]: time="2024-02-09T19:03:51.311142459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-00ed68a33d,Uid:0d61aea9025b9573fa7232d9b4a47357,Namespace:kube-system,Attempt:0,} returns sandbox id \"f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65\"" Feb 9 19:03:51.315097 kubelet[2187]: E0209 19:03:51.315062 2187 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Feb 9 19:03:51.317986 env[1412]: time="2024-02-09T19:03:51.317954190Z" level=info msg="CreateContainer within sandbox 
\"f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:03:51.321879 env[1412]: time="2024-02-09T19:03:51.321272655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-00ed68a33d,Uid:f23bb01936aabb8aca4745fdb21d530c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0561bfabf0b98200112e13a4dd03a44f19797ad79f7867c64911e56533be242\"" Feb 9 19:03:51.326468 env[1412]: time="2024-02-09T19:03:51.326438155Z" level=info msg="CreateContainer within sandbox \"f0561bfabf0b98200112e13a4dd03a44f19797ad79f7867c64911e56533be242\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:03:51.406880 env[1412]: time="2024-02-09T19:03:51.406819412Z" level=info msg="CreateContainer within sandbox \"2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3\"" Feb 9 19:03:51.407711 env[1412]: time="2024-02-09T19:03:51.407670228Z" level=info msg="StartContainer for \"f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3\"" Feb 9 19:03:51.421427 env[1412]: time="2024-02-09T19:03:51.421382994Z" level=info msg="CreateContainer within sandbox \"f0561bfabf0b98200112e13a4dd03a44f19797ad79f7867c64911e56533be242\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6bbef9504d43d5b225144a6eac335ad2128b88cc1f6683763b325d852197ca46\"" Feb 9 19:03:51.422265 env[1412]: time="2024-02-09T19:03:51.422230610Z" level=info msg="StartContainer for \"6bbef9504d43d5b225144a6eac335ad2128b88cc1f6683763b325d852197ca46\"" Feb 9 19:03:51.432156 env[1412]: time="2024-02-09T19:03:51.432072001Z" level=info msg="CreateContainer within sandbox \"f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9\"" Feb 9 19:03:51.438057 env[1412]: time="2024-02-09T19:03:51.437978615Z" level=info msg="StartContainer for \"72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9\"" Feb 9 19:03:51.582062 env[1412]: time="2024-02-09T19:03:51.581897702Z" level=info msg="StartContainer for \"6bbef9504d43d5b225144a6eac335ad2128b88cc1f6683763b325d852197ca46\" returns successfully" Feb 9 19:03:51.590581 env[1412]: time="2024-02-09T19:03:51.590525669Z" level=info msg="StartContainer for \"f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3\" returns successfully" Feb 9 19:03:51.666879 env[1412]: time="2024-02-09T19:03:51.666798947Z" level=info msg="StartContainer for \"72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9\" returns successfully" Feb 9 19:03:52.354420 kubelet[2187]: I0209 19:03:52.354383 2187 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:54.205729 kubelet[2187]: I0209 19:03:54.205670 2187 apiserver.go:52] "Watching apiserver" Feb 9 19:03:54.314434 kubelet[2187]: I0209 19:03:54.314386 2187 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:54.331817 kubelet[2187]: I0209 19:03:54.331778 2187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:03:54.355817 kubelet[2187]: E0209 19:03:54.355648 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247241fd6964b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 205390923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 205390923, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:54.375742 kubelet[2187]: I0209 19:03:54.375693 2187 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:03:54.417263 kubelet[2187]: E0209 19:03:54.417077 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b2472420d96eed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 222354669, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 222354669, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:54.473072 kubelet[2187]: E0209 19:03:54.472781 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b2472426218042", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310963778, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310963778, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:54.522672 kubelet[2187]: E0209 19:03:54.522624 2187 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:54.523600 kubelet[2187]: E0209 19:03:54.523570 2187 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:54.524031 kubelet[2187]: E0209 19:03:54.524003 2187 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-00ed68a33d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:54.527139 kubelet[2187]: E0209 19:03:54.527039 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247242621b62b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310977579, 
time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310977579, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:54.582090 kubelet[2187]: E0209 19:03:54.581944 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247242621c9b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310982579, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310982579, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:54.636257 kubelet[2187]: E0209 19:03:54.636113 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b2472427706fcf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 332914127, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 332914127, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:54.694398 kubelet[2187]: E0209 19:03:54.694263 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b2472426218042", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310963778, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 341725707, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:54.750468 kubelet[2187]: E0209 19:03:54.750243 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247242621b62b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310977579, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 341732607, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:54.807705 kubelet[2187]: E0209 19:03:54.807557 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247242621c9b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310982579, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 341761207, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:55.011773 kubelet[2187]: E0209 19:03:55.011632 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b2472426218042", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310963778, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 544423645, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:55.410926 kubelet[2187]: E0209 19:03:55.410691 2187 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-00ed68a33d.17b247242621b62b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-00ed68a33d", UID:"ci-3510.3.2-a-00ed68a33d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-00ed68a33d status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 310977579, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 49, 544437246, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:57.726618 systemd[1]: Reloading. 
Feb 9 19:03:57.810754 /usr/lib/systemd/system-generators/torcx-generator[2514]: time="2024-02-09T19:03:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:57.810798 /usr/lib/systemd/system-generators/torcx-generator[2514]: time="2024-02-09T19:03:57Z" level=info msg="torcx already run" Feb 9 19:03:57.945485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:57.945510 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:57.969614 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:58.071825 kubelet[2187]: I0209 19:03:58.071788 2187 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:58.072280 systemd[1]: Stopping kubelet.service... Feb 9 19:03:58.086674 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:03:58.087324 systemd[1]: Stopped kubelet.service. Feb 9 19:03:58.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:03:58.092507 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 9 19:03:58.092610 kernel: audit: type=1131 audit(1707505438.086:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:58.096767 systemd[1]: Started kubelet.service. Feb 9 19:03:58.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:58.131060 kernel: audit: type=1130 audit(1707505438.092:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:58.210734 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:58.211237 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:58.211237 kubelet[2584]: I0209 19:03:58.211156 2584 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:58.212616 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:58.212616 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:58.219265 kubelet[2584]: I0209 19:03:58.219232 2584 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:03:58.219265 kubelet[2584]: I0209 19:03:58.219260 2584 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:58.219589 kubelet[2584]: I0209 19:03:58.219569 2584 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:03:58.220986 kubelet[2584]: I0209 19:03:58.220958 2584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:03:58.221973 kubelet[2584]: I0209 19:03:58.221953 2584 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:58.226927 kubelet[2584]: I0209 19:03:58.226905 2584 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:03:58.227487 kubelet[2584]: I0209 19:03:58.227470 2584 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:58.227583 kubelet[2584]: I0209 19:03:58.227571 2584 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan 
Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:58.227717 kubelet[2584]: I0209 19:03:58.227599 2584 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:58.227717 kubelet[2584]: I0209 19:03:58.227616 2584 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:03:58.227717 kubelet[2584]: I0209 19:03:58.227673 2584 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:58.236792 kubelet[2584]: I0209 19:03:58.236770 2584 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:03:58.237273 kubelet[2584]: I0209 19:03:58.237256 2584 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:03:58.237419 kubelet[2584]: I0209 19:03:58.237407 2584 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:03:58.237520 kubelet[2584]: I0209 19:03:58.237509 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:58.246126 kubelet[2584]: I0209 19:03:58.246100 2584 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:03:58.246957 kubelet[2584]: I0209 19:03:58.246939 2584 server.go:1186] "Started kubelet" Feb 9 19:03:58.248000 audit[2584]: AVC avc: denied { mac_admin } for pid=2584 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 19:03:58.255248 kubelet[2584]: I0209 19:03:58.249629 2584 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:03:58.255248 kubelet[2584]: I0209 19:03:58.249661 2584 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:03:58.255248 kubelet[2584]: I0209 19:03:58.249688 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:03:58.255248 kubelet[2584]: E0209 19:03:58.250726 2584 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:03:58.255248 kubelet[2584]: E0209 19:03:58.250747 2584 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:03:58.268064 kernel: audit: type=1400 audit(1707505438.248:230): avc: denied { mac_admin } for pid=2584 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:03:58.268242 kernel: audit: type=1401 audit(1707505438.248:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:58.248000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:58.276648 kubelet[2584]: I0209 19:03:58.276618 2584 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:03:58.277936 kubelet[2584]: I0209 19:03:58.277911 2584 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:03:58.282394 kubelet[2584]: I0209 19:03:58.278557 2584 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:03:58.248000 audit[2584]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00092e150 a1=c00077c4c8 a2=c00092e120 a3=25 items=0 ppid=1 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:58.301890 kubelet[2584]: I0209 19:03:58.278585 2584 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:03:58.307687 kernel: audit: type=1300 audit(1707505438.248:230): arch=c000003e syscall=188 success=no exit=-22 a0=c00092e150 a1=c00077c4c8 a2=c00092e120 a3=25 items=0 ppid=1 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:58.248000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:58.331554 kernel: audit: type=1327 audit(1707505438.248:230): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:58.248000 audit[2584]: AVC avc: denied { mac_admin } for pid=2584 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:03:58.348432 kubelet[2584]: I0209 19:03:58.339942 2584 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:03:58.349289 kernel: audit: type=1400 audit(1707505438.248:231): avc: denied { mac_admin } for pid=2584 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:03:58.248000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:58.360262 kernel: audit: type=1401 audit(1707505438.248:231): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:58.248000 audit[2584]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0004c55c0 a1=c00077c4e0 a2=c00092e1e0 a3=25 items=0 ppid=1 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:58.378097 kubelet[2584]: I0209 19:03:58.369998 2584 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:03:58.378097 kubelet[2584]: I0209 19:03:58.370039 2584 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:03:58.378097 kubelet[2584]: I0209 19:03:58.370065 2584 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:03:58.378097 kubelet[2584]: E0209 19:03:58.370121 2584 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:03:58.388269 kernel: audit: type=1300 audit(1707505438.248:231): arch=c000003e syscall=188 success=no exit=-22 a0=c0004c55c0 a1=c00077c4e0 a2=c00092e1e0 a3=25 items=0 ppid=1 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:58.388547 kernel: audit: type=1327 audit(1707505438.248:231): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:58.248000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:58.408646 kubelet[2584]: I0209 19:03:58.408610 2584 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:58.436319 kubelet[2584]: I0209 19:03:58.436273 2584 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:58.436725 kubelet[2584]: I0209 19:03:58.436708 2584 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:58.470240 
kubelet[2584]: E0209 19:03:58.470178 2584 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 19:03:58.495604 kubelet[2584]: I0209 19:03:58.495570 2584 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:58.495900 kubelet[2584]: I0209 19:03:58.495881 2584 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:58.496085 kubelet[2584]: I0209 19:03:58.496064 2584 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:58.671407 kubelet[2584]: E0209 19:03:58.671199 2584 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 19:03:58.846552 kubelet[2584]: I0209 19:03:58.846494 2584 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:03:58.846552 kubelet[2584]: I0209 19:03:58.846569 2584 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:03:58.846829 kubelet[2584]: I0209 19:03:58.846581 2584 policy_none.go:49] "None policy: Start" Feb 9 19:03:58.848214 kubelet[2584]: I0209 19:03:58.848183 2584 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:58.848214 kubelet[2584]: I0209 19:03:58.848217 2584 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:58.848422 kubelet[2584]: I0209 19:03:58.848409 2584 state_mem.go:75] "Updated machine memory state" Feb 9 19:03:58.849771 kubelet[2584]: I0209 19:03:58.849745 2584 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:58.849885 kubelet[2584]: I0209 19:03:58.849831 2584 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:03:58.848000 audit[2584]: AVC avc: denied { mac_admin } for pid=2584 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:03:58.848000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:03:58.848000 audit[2584]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f511d0 a1=c000f4e918 a2=c000f511a0 a3=25 items=0 ppid=1 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:58.848000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:03:58.852976 kubelet[2584]: I0209 19:03:58.852948 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:59.072456 kubelet[2584]: I0209 19:03:59.072380 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:59.072716 kubelet[2584]: I0209 19:03:59.072568 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:59.072716 kubelet[2584]: I0209 19:03:59.072623 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:59.108757 kubelet[2584]: I0209 19:03:59.108696 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109015 kubelet[2584]: I0209 19:03:59.108778 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109015 kubelet[2584]: I0209 19:03:59.108808 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109015 kubelet[2584]: I0209 19:03:59.108842 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f23bb01936aabb8aca4745fdb21d530c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-00ed68a33d\" (UID: \"f23bb01936aabb8aca4745fdb21d530c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109015 kubelet[2584]: I0209 19:03:59.108875 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109015 kubelet[2584]: I0209 19:03:59.108907 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109227 kubelet[2584]: I0209 19:03:59.108934 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109227 kubelet[2584]: I0209 19:03:59.108967 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab129e358fe0fb438fd54bfc1858a14a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-00ed68a33d\" (UID: \"ab129e358fe0fb438fd54bfc1858a14a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.109227 kubelet[2584]: I0209 19:03:59.108998 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d61aea9025b9573fa7232d9b4a47357-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-00ed68a33d\" (UID: \"0d61aea9025b9573fa7232d9b4a47357\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-00ed68a33d" Feb 9 19:03:59.244971 kubelet[2584]: I0209 19:03:59.244901 2584 apiserver.go:52] "Watching apiserver" Feb 9 19:03:59.302608 kubelet[2584]: I0209 19:03:59.302552 2584 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:03:59.310796 kubelet[2584]: I0209 19:03:59.310731 2584 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:04:00.056044 kubelet[2584]: I0209 19:04:00.055975 2584 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-00ed68a33d" podStartSLOduration=1.055888015 pod.CreationTimestamp="2024-02-09 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:59.752690455 +0000 UTC m=+1.636689067" watchObservedRunningTime="2024-02-09 19:04:00.055888015 +0000 UTC m=+1.939886727" Feb 9 19:04:00.483448 kubelet[2584]: I0209 19:04:00.483281 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-00ed68a33d" podStartSLOduration=1.483115486 pod.CreationTimestamp="2024-02-09 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:00.056839029 +0000 UTC m=+1.940837641" watchObservedRunningTime="2024-02-09 19:04:00.483115486 +0000 UTC m=+2.367114198" Feb 9 19:04:04.102186 sudo[1758]: pam_unix(sudo:session): session closed for user root Feb 9 19:04:04.101000 audit[1758]: USER_END pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:04:04.106913 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:04:04.107043 kernel: audit: type=1106 audit(1707505444.101:233): pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:04:04.101000 audit[1758]: CRED_DISP pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:04.138958 kernel: audit: type=1104 audit(1707505444.101:234): pid=1758 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:04:04.205622 sshd[1754]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:04.206000 audit[1754]: USER_END pid=1754 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:04:04.209934 systemd[1]: sshd@6-10.200.8.37:22-10.200.12.6:36890.service: Deactivated successfully. Feb 9 19:04:04.211041 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:04:04.219318 systemd-logind[1374]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:04:04.220761 systemd-logind[1374]: Removed session 9. 
Feb 9 19:04:04.227069 kernel: audit: type=1106 audit(1707505444.206:235): pid=1754 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:04:04.227221 kernel: audit: type=1104 audit(1707505444.206:236): pid=1754 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:04:04.206000 audit[1754]: CRED_DISP pid=1754 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:04:04.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.37:22-10.200.12.6:36890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:04.257715 kernel: audit: type=1131 audit(1707505444.206:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.37:22-10.200.12.6:36890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:06.240249 kubelet[2584]: I0209 19:04:06.240204 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-00ed68a33d" podStartSLOduration=7.24016242 pod.CreationTimestamp="2024-02-09 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:00.483527393 +0000 UTC m=+2.367526005" watchObservedRunningTime="2024-02-09 19:04:06.24016242 +0000 UTC m=+8.124161132" Feb 9 19:04:10.053375 kubelet[2584]: I0209 19:04:10.053341 2584 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:04:10.054608 env[1412]: time="2024-02-09T19:04:10.054559531Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:04:10.055356 kubelet[2584]: I0209 19:04:10.055332 2584 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:04:10.062373 kubelet[2584]: I0209 19:04:10.062350 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:10.075319 kubelet[2584]: I0209 19:04:10.075291 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1061e09c-cca8-4e92-9ad7-2bd36d692340-kube-proxy\") pod \"kube-proxy-qkkfr\" (UID: \"1061e09c-cca8-4e92-9ad7-2bd36d692340\") " pod="kube-system/kube-proxy-qkkfr" Feb 9 19:04:10.075431 kubelet[2584]: I0209 19:04:10.075338 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1061e09c-cca8-4e92-9ad7-2bd36d692340-xtables-lock\") pod \"kube-proxy-qkkfr\" (UID: \"1061e09c-cca8-4e92-9ad7-2bd36d692340\") " pod="kube-system/kube-proxy-qkkfr" Feb 9 19:04:10.075431 kubelet[2584]: I0209 19:04:10.075368 2584 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1061e09c-cca8-4e92-9ad7-2bd36d692340-lib-modules\") pod \"kube-proxy-qkkfr\" (UID: \"1061e09c-cca8-4e92-9ad7-2bd36d692340\") " pod="kube-system/kube-proxy-qkkfr" Feb 9 19:04:10.075431 kubelet[2584]: I0209 19:04:10.075400 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbhr\" (UniqueName: \"kubernetes.io/projected/1061e09c-cca8-4e92-9ad7-2bd36d692340-kube-api-access-txbhr\") pod \"kube-proxy-qkkfr\" (UID: \"1061e09c-cca8-4e92-9ad7-2bd36d692340\") " pod="kube-system/kube-proxy-qkkfr" Feb 9 19:04:10.291057 kubelet[2584]: I0209 19:04:10.291003 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:10.369950 env[1412]: time="2024-02-09T19:04:10.369267340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkkfr,Uid:1061e09c-cca8-4e92-9ad7-2bd36d692340,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:10.377039 kubelet[2584]: I0209 19:04:10.376991 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d58a6a5d-5655-43c2-9be8-2a8403339433-var-lib-calico\") pod \"tigera-operator-cfc98749c-m4qqg\" (UID: \"d58a6a5d-5655-43c2-9be8-2a8403339433\") " pod="tigera-operator/tigera-operator-cfc98749c-m4qqg" Feb 9 19:04:10.377192 kubelet[2584]: I0209 19:04:10.377063 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnmn\" (UniqueName: \"kubernetes.io/projected/d58a6a5d-5655-43c2-9be8-2a8403339433-kube-api-access-cmnmn\") pod \"tigera-operator-cfc98749c-m4qqg\" (UID: \"d58a6a5d-5655-43c2-9be8-2a8403339433\") " pod="tigera-operator/tigera-operator-cfc98749c-m4qqg" Feb 9 19:04:10.415319 env[1412]: time="2024-02-09T19:04:10.415250397Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:10.415525 env[1412]: time="2024-02-09T19:04:10.415290197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:10.415525 env[1412]: time="2024-02-09T19:04:10.415304097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:10.415720 env[1412]: time="2024-02-09T19:04:10.415675002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfe746080ec44ccae3e8ac5c456d3ee6107a61007f8abfe374e0e9d1da3fac86 pid=2691 runtime=io.containerd.runc.v2 Feb 9 19:04:10.466428 env[1412]: time="2024-02-09T19:04:10.466387016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkkfr,Uid:1061e09c-cca8-4e92-9ad7-2bd36d692340,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfe746080ec44ccae3e8ac5c456d3ee6107a61007f8abfe374e0e9d1da3fac86\"" Feb 9 19:04:10.470823 env[1412]: time="2024-02-09T19:04:10.470787869Z" level=info msg="CreateContainer within sandbox \"cfe746080ec44ccae3e8ac5c456d3ee6107a61007f8abfe374e0e9d1da3fac86\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:04:10.549175 env[1412]: time="2024-02-09T19:04:10.549068116Z" level=info msg="CreateContainer within sandbox \"cfe746080ec44ccae3e8ac5c456d3ee6107a61007f8abfe374e0e9d1da3fac86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f3ff7e6976194f1270e859b03f55bf4271305c5d56bcdf9f49fe597c33abfa4f\"" Feb 9 19:04:10.551566 env[1412]: time="2024-02-09T19:04:10.549920127Z" level=info msg="StartContainer for \"f3ff7e6976194f1270e859b03f55bf4271305c5d56bcdf9f49fe597c33abfa4f\"" Feb 9 19:04:10.634478 env[1412]: time="2024-02-09T19:04:10.631000208Z" level=info msg="StartContainer for 
\"f3ff7e6976194f1270e859b03f55bf4271305c5d56bcdf9f49fe597c33abfa4f\" returns successfully" Feb 9 19:04:10.680000 audit[2782]: NETFILTER_CFG table=mangle:63 family=10 entries=1 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.680000 audit[2782]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0eacb790 a2=0 a3=7ffc0eacb77c items=0 ppid=2744 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.712697 kernel: audit: type=1325 audit(1707505450.680:238): table=mangle:63 family=10 entries=1 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.712806 kernel: audit: type=1300 audit(1707505450.680:238): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0eacb790 a2=0 a3=7ffc0eacb77c items=0 ppid=2744 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.712831 kernel: audit: type=1327 audit(1707505450.680:238): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:04:10.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:04:10.724051 kernel: audit: type=1325 audit(1707505450.683:239): table=mangle:64 family=2 entries=1 op=nft_register_chain pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.683000 audit[2783]: NETFILTER_CFG table=mangle:64 family=2 entries=1 op=nft_register_chain pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.734072 kernel: audit: type=1300 audit(1707505450.683:239): arch=c000003e syscall=46 
success=yes exit=104 a0=3 a1=7ffdb410ae20 a2=0 a3=7ffdb410ae0c items=0 ppid=2744 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.683000 audit[2783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb410ae20 a2=0 a3=7ffdb410ae0c items=0 ppid=2744 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.754263 kernel: audit: type=1327 audit(1707505450.683:239): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:04:10.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:04:10.685000 audit[2784]: NETFILTER_CFG table=nat:65 family=10 entries=1 op=nft_register_chain pid=2784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.773506 kernel: audit: type=1325 audit(1707505450.685:240): table=nat:65 family=10 entries=1 op=nft_register_chain pid=2784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.685000 audit[2784]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc64fe5ff0 a2=0 a3=7ffc64fe5fdc items=0 ppid=2744 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.793692 kernel: audit: type=1300 audit(1707505450.685:240): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc64fe5ff0 a2=0 a3=7ffc64fe5fdc items=0 ppid=2744 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.797293 kernel: audit: type=1327 audit(1707505450.685:240): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:04:10.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:04:10.691000 audit[2785]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_chain pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.691000 audit[2785]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd7b19760 a2=0 a3=7ffdd7b1974c items=0 ppid=2744 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.818229 kernel: audit: type=1325 audit(1707505450.691:241): table=nat:66 family=2 entries=1 op=nft_register_chain pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:04:10.692000 audit[2786]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.692000 audit[2786]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc7e48df0 a2=0 a3=7ffdc7e48ddc items=0 ppid=2744 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:04:10.696000 audit[2787]: 
NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.696000 audit[2787]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee21496a0 a2=0 a3=7ffee214968c items=0 ppid=2744 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:04:10.793000 audit[2788]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.793000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffa2d79340 a2=0 a3=7fffa2d7932c items=0 ppid=2744 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:04:10.799000 audit[2790]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=2790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.799000 audit[2790]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc928d5c50 a2=0 a3=7ffc928d5c3c items=0 ppid=2744 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.799000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:04:10.811000 audit[2794]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.811000 audit[2794]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffb06ba190 a2=0 a3=7fffb06ba17c items=0 ppid=2744 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:04:10.816000 audit[2795]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_chain pid=2795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.816000 audit[2795]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe259729c0 a2=0 a3=7ffe259729ac items=0 ppid=2744 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:04:10.819000 audit[2797]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.819000 audit[2797]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fff1d38e760 a2=0 a3=7fff1d38e74c items=0 ppid=2744 pid=2797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:04:10.821000 audit[2798]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_chain pid=2798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.821000 audit[2798]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb5609a90 a2=0 a3=7fffb5609a7c items=0 ppid=2744 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:04:10.823000 audit[2800]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=2800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.823000 audit[2800]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb0004e00 a2=0 a3=7ffdb0004dec items=0 ppid=2744 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.823000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:04:10.827000 audit[2803]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.827000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeaaa398f0 a2=0 a3=7ffeaaa398dc items=0 ppid=2744 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:04:10.828000 audit[2804]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_chain pid=2804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.828000 audit[2804]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9af98a70 a2=0 a3=7ffc9af98a5c items=0 ppid=2744 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:04:10.830000 audit[2806]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.830000 audit[2806]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fff40567480 a2=0 a3=7fff4056746c items=0 ppid=2744 pid=2806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:04:10.831000 audit[2807]: NETFILTER_CFG table=filter:79 family=2 entries=1 op=nft_register_chain pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.831000 audit[2807]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe79ce1b20 a2=0 a3=7ffe79ce1b0c items=0 ppid=2744 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:04:10.834000 audit[2809]: NETFILTER_CFG table=filter:80 family=2 entries=1 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.834000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca796b7f0 a2=0 a3=7ffca796b7dc items=0 ppid=2744 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.834000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:04:10.838000 audit[2812]: NETFILTER_CFG table=filter:81 family=2 entries=1 op=nft_register_rule pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.838000 audit[2812]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd02dbbee0 a2=0 a3=7ffd02dbbecc items=0 ppid=2744 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:04:10.841000 audit[2815]: NETFILTER_CFG table=filter:82 family=2 entries=1 op=nft_register_rule pid=2815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.841000 audit[2815]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde36be940 a2=0 a3=7ffde36be92c items=0 ppid=2744 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:04:10.842000 audit[2816]: NETFILTER_CFG table=nat:83 family=2 entries=1 
op=nft_register_chain pid=2816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.842000 audit[2816]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf2a63110 a2=0 a3=7ffcf2a630fc items=0 ppid=2744 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:04:10.845000 audit[2818]: NETFILTER_CFG table=nat:84 family=2 entries=1 op=nft_register_rule pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.845000 audit[2818]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc5f3eb730 a2=0 a3=7ffc5f3eb71c items=0 ppid=2744 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:04:10.848000 audit[2821]: NETFILTER_CFG table=nat:85 family=2 entries=1 op=nft_register_rule pid=2821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:04:10.848000 audit[2821]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb2e2e170 a2=0 a3=7ffeb2e2e15c items=0 ppid=2744 pid=2821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.848000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:04:10.891000 audit[2825]: NETFILTER_CFG table=filter:86 family=2 entries=6 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:10.891000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdcf50e030 a2=0 a3=7ffdcf50e01c items=0 ppid=2744 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:10.897443 env[1412]: time="2024-02-09T19:04:10.897380732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-m4qqg,Uid:d58a6a5d-5655-43c2-9be8-2a8403339433,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:04:10.913000 audit[2825]: NETFILTER_CFG table=nat:87 family=2 entries=17 op=nft_register_chain pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:10.913000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffdcf50e030 a2=0 a3=7ffdcf50e01c items=0 ppid=2744 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:10.918000 audit[2829]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:04:10.918000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd6d845c20 a2=0 a3=7ffd6d845c0c items=0 ppid=2744 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:04:10.921000 audit[2831]: NETFILTER_CFG table=filter:89 family=10 entries=2 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.921000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff6c942c40 a2=0 a3=7fff6c942c2c items=0 ppid=2744 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:04:10.925000 audit[2834]: NETFILTER_CFG table=filter:90 family=10 entries=2 op=nft_register_chain pid=2834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.925000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffde7344200 a2=0 a3=7ffde73441ec items=0 ppid=2744 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.925000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:04:10.926000 audit[2835]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.926000 audit[2835]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe02d30950 a2=0 a3=7ffe02d3093c items=0 ppid=2744 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:04:10.928000 audit[2837]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.928000 audit[2837]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3e7cc770 a2=0 a3=7ffd3e7cc75c items=0 ppid=2744 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:04:10.931000 audit[2839]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_chain pid=2839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.931000 audit[2839]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffdedb81620 a2=0 a3=7ffdedb8160c items=0 ppid=2744 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:04:10.935000 audit[2844]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.935000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcc2c0490 a2=0 a3=7fffcc2c047c items=0 ppid=2744 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:04:10.942099 env[1412]: time="2024-02-09T19:04:10.941121962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:10.942099 env[1412]: time="2024-02-09T19:04:10.941270563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:10.942099 env[1412]: time="2024-02-09T19:04:10.941289664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:10.942099 env[1412]: time="2024-02-09T19:04:10.941555767Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b pid=2847 runtime=io.containerd.runc.v2 Feb 9 19:04:10.941000 audit[2854]: NETFILTER_CFG table=filter:95 family=10 entries=2 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.941000 audit[2854]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdf552a1d0 a2=0 a3=7ffdf552a1bc items=0 ppid=2744 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:04:10.943000 audit[2860]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_chain pid=2860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.943000 audit[2860]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1c227670 a2=0 a3=7ffe1c22765c items=0 ppid=2744 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:04:10.947000 audit[2862]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2862 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:04:10.947000 audit[2862]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc92113bb0 a2=0 a3=7ffc92113b9c items=0 ppid=2744 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:04:10.949000 audit[2868]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_chain pid=2868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.949000 audit[2868]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe30b49750 a2=0 a3=7ffe30b4973c items=0 ppid=2744 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:04:10.953000 audit[2870]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.953000 audit[2870]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2ea24f00 a2=0 a3=7fff2ea24eec items=0 ppid=2744 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.953000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:04:10.960000 audit[2876]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_rule pid=2876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.960000 audit[2876]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd87b42a40 a2=0 a3=7ffd87b42a2c items=0 ppid=2744 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:04:10.965000 audit[2879]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.965000 audit[2879]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe28b179f0 a2=0 a3=7ffe28b179dc items=0 ppid=2744 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:04:10.966000 audit[2880]: NETFILTER_CFG table=nat:102 family=10 
entries=1 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.966000 audit[2880]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5570bcb0 a2=0 a3=7fff5570bc9c items=0 ppid=2744 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:04:10.970000 audit[2882]: NETFILTER_CFG table=nat:103 family=10 entries=2 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.970000 audit[2882]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdd8c1fc10 a2=0 a3=7ffdd8c1fbfc items=0 ppid=2744 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:04:10.974000 audit[2885]: NETFILTER_CFG table=nat:104 family=10 entries=2 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:04:10.974000 audit[2885]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffee4a60a30 a2=0 a3=7ffee4a60a1c items=0 ppid=2744 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.974000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:04:10.982000 audit[2891]: NETFILTER_CFG table=filter:105 family=10 entries=3 op=nft_register_rule pid=2891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:04:10.982000 audit[2891]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffde5964bd0 a2=0 a3=7ffde5964bbc items=0 ppid=2744 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.982000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:10.984000 audit[2891]: NETFILTER_CFG table=nat:106 family=10 entries=10 op=nft_register_chain pid=2891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:04:10.984000 audit[2891]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffde5964bd0 a2=0 a3=7ffde5964bbc items=0 ppid=2744 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:10.984000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:11.031292 env[1412]: time="2024-02-09T19:04:11.031232144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-m4qqg,Uid:d58a6a5d-5655-43c2-9be8-2a8403339433,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b\"" Feb 9 19:04:11.034112 env[1412]: 
time="2024-02-09T19:04:11.033581272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:04:11.190936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252679570.mount: Deactivated successfully. Feb 9 19:04:12.252442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870307938.mount: Deactivated successfully. Feb 9 19:04:13.562556 env[1412]: time="2024-02-09T19:04:13.562485128Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:13.572743 env[1412]: time="2024-02-09T19:04:13.572678443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:13.578231 env[1412]: time="2024-02-09T19:04:13.578174106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:13.585461 env[1412]: time="2024-02-09T19:04:13.585416888Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:13.586120 env[1412]: time="2024-02-09T19:04:13.586079795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:04:13.591080 env[1412]: time="2024-02-09T19:04:13.591045951Z" level=info msg="CreateContainer within sandbox \"4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:04:13.626739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449234689.mount: 
Deactivated successfully. Feb 9 19:04:13.640746 env[1412]: time="2024-02-09T19:04:13.640671712Z" level=info msg="CreateContainer within sandbox \"4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a\"" Feb 9 19:04:13.641519 env[1412]: time="2024-02-09T19:04:13.641427121Z" level=info msg="StartContainer for \"c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a\"" Feb 9 19:04:13.714625 env[1412]: time="2024-02-09T19:04:13.714537347Z" level=info msg="StartContainer for \"c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a\" returns successfully" Feb 9 19:04:14.468444 kubelet[2584]: I0209 19:04:14.468402 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qkkfr" podStartSLOduration=4.468335654 pod.CreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:11.462931351 +0000 UTC m=+13.346929963" watchObservedRunningTime="2024-02-09 19:04:14.468335654 +0000 UTC m=+16.352334366" Feb 9 19:04:14.470836 kubelet[2584]: I0209 19:04:14.470805 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-m4qqg" podStartSLOduration=-9.223372032384026e+09 pod.CreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="2024-02-09 19:04:11.032756462 +0000 UTC m=+12.916755074" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:14.468219953 +0000 UTC m=+16.352218665" watchObservedRunningTime="2024-02-09 19:04:14.470749681 +0000 UTC m=+16.354748293" Feb 9 19:04:14.619804 systemd[1]: run-containerd-runc-k8s.io-c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a-runc.sxRsnC.mount: Deactivated successfully. 
Feb 9 19:04:15.821055 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 19:04:15.821246 kernel: audit: type=1325 audit(1707505455.812:282): table=filter:107 family=2 entries=13 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:15.812000 audit[2970]: NETFILTER_CFG table=filter:107 family=2 entries=13 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:15.831045 kernel: audit: type=1300 audit(1707505455.812:282): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffea090e940 a2=0 a3=7ffea090e92c items=0 ppid=2744 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:15.812000 audit[2970]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffea090e940 a2=0 a3=7ffea090e92c items=0 ppid=2744 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:15.850318 kernel: audit: type=1327 audit(1707505455.812:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:15.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:15.813000 audit[2970]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:15.872213 kernel: audit: type=1325 audit(1707505455.813:283): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:15.813000 audit[2970]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 
a0=3 a1=7ffea090e940 a2=0 a3=7ffea090e92c items=0 ppid=2744 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:15.904068 kernel: audit: type=1300 audit(1707505455.813:283): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffea090e940 a2=0 a3=7ffea090e92c items=0 ppid=2744 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:15.919469 kubelet[2584]: I0209 19:04:15.919420 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:15.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:15.966062 kernel: audit: type=1327 audit(1707505455.813:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:16.031146 kubelet[2584]: I0209 19:04:16.031093 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2chm\" (UniqueName: \"kubernetes.io/projected/b5c6ab68-15e3-4128-9813-3062d8e337f8-kube-api-access-m2chm\") pod \"calico-typha-8499488978-2zppj\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") " pod="calico-system/calico-typha-8499488978-2zppj" Feb 9 19:04:16.031363 kubelet[2584]: I0209 19:04:16.031180 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c6ab68-15e3-4128-9813-3062d8e337f8-tigera-ca-bundle\") pod \"calico-typha-8499488978-2zppj\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") " pod="calico-system/calico-typha-8499488978-2zppj" Feb 9 19:04:16.031363 kubelet[2584]: I0209 
19:04:16.031212 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5c6ab68-15e3-4128-9813-3062d8e337f8-typha-certs\") pod \"calico-typha-8499488978-2zppj\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") " pod="calico-system/calico-typha-8499488978-2zppj" Feb 9 19:04:16.084280 kubelet[2584]: I0209 19:04:16.084120 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:16.089000 audit[2996]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:16.103063 kernel: audit: type=1325 audit(1707505456.089:284): table=filter:109 family=2 entries=14 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:16.089000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd13621c50 a2=0 a3=7ffd13621c3c items=0 ppid=2744 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:16.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:16.140611 kernel: audit: type=1300 audit(1707505456.089:284): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd13621c50 a2=0 a3=7ffd13621c3c items=0 ppid=2744 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:16.140807 kernel: audit: type=1327 audit(1707505456.089:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:16.111000 audit[2996]: NETFILTER_CFG 
table=nat:110 family=2 entries=20 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:16.184043 kernel: audit: type=1325 audit(1707505456.111:285): table=nat:110 family=2 entries=20 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:16.111000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd13621c50 a2=0 a3=7ffd13621c3c items=0 ppid=2744 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:16.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:16.197616 kubelet[2584]: I0209 19:04:16.197559 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:16.198048 kubelet[2584]: E0209 19:04:16.198001 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:16.231901 env[1412]: time="2024-02-09T19:04:16.231821504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8499488978-2zppj,Uid:b5c6ab68-15e3-4128-9813-3062d8e337f8,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:16.236169 kubelet[2584]: I0209 19:04:16.236129 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-lib-modules\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236390 kubelet[2584]: I0209 
19:04:16.236297 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-run-calico\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236390 kubelet[2584]: I0209 19:04:16.236330 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-lib-calico\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236490 kubelet[2584]: I0209 19:04:16.236422 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-flexvol-driver-host\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236490 kubelet[2584]: I0209 19:04:16.236474 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e7f171-50ff-46d8-a0ff-56b1574dfed7-tigera-ca-bundle\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236616 kubelet[2584]: I0209 19:04:16.236599 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26e7f171-50ff-46d8-a0ff-56b1574dfed7-node-certs\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236681 kubelet[2584]: I0209 19:04:16.236642 2584 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-bin-dir\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236784 kubelet[2584]: I0209 19:04:16.236769 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-log-dir\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.236901 kubelet[2584]: I0209 19:04:16.236880 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-xtables-lock\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.237114 kubelet[2584]: I0209 19:04:16.237095 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-policysync\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.237216 kubelet[2584]: I0209 19:04:16.237179 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-net-dir\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.237270 kubelet[2584]: I0209 19:04:16.237246 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nzd\" 
(UniqueName: \"kubernetes.io/projected/26e7f171-50ff-46d8-a0ff-56b1574dfed7-kube-api-access-m5nzd\") pod \"calico-node-tng4c\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " pod="calico-system/calico-node-tng4c" Feb 9 19:04:16.296896 env[1412]: time="2024-02-09T19:04:16.296795991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:16.297247 env[1412]: time="2024-02-09T19:04:16.297213396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:16.297367 env[1412]: time="2024-02-09T19:04:16.297343997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:16.297692 env[1412]: time="2024-02-09T19:04:16.297618600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea pid=3005 runtime=io.containerd.runc.v2 Feb 9 19:04:16.341798 kubelet[2584]: I0209 19:04:16.341627 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b139bbd0-9b20-41a8-9896-7f2a7ac77265-socket-dir\") pod \"csi-node-driver-r2kjb\" (UID: \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\") " pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:16.341798 kubelet[2584]: I0209 19:04:16.341716 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b139bbd0-9b20-41a8-9896-7f2a7ac77265-registration-dir\") pod \"csi-node-driver-r2kjb\" (UID: \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\") " pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:16.341798 kubelet[2584]: I0209 19:04:16.341775 2584 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5nc\" (UniqueName: \"kubernetes.io/projected/b139bbd0-9b20-41a8-9896-7f2a7ac77265-kube-api-access-zs5nc\") pod \"csi-node-driver-r2kjb\" (UID: \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\") " pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:16.342114 kubelet[2584]: I0209 19:04:16.341872 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b139bbd0-9b20-41a8-9896-7f2a7ac77265-varrun\") pod \"csi-node-driver-r2kjb\" (UID: \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\") " pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:16.342114 kubelet[2584]: I0209 19:04:16.341951 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b139bbd0-9b20-41a8-9896-7f2a7ac77265-kubelet-dir\") pod \"csi-node-driver-r2kjb\" (UID: \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\") " pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:16.349720 kubelet[2584]: E0209 19:04:16.349678 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.349981 kubelet[2584]: W0209 19:04:16.349960 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.350095 kubelet[2584]: E0209 19:04:16.350083 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.350398 kubelet[2584]: E0209 19:04:16.350387 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.350471 kubelet[2584]: W0209 19:04:16.350462 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.350550 kubelet[2584]: E0209 19:04:16.350542 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.350816 kubelet[2584]: E0209 19:04:16.350805 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.350894 kubelet[2584]: W0209 19:04:16.350885 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.350963 kubelet[2584]: E0209 19:04:16.350956 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.353254 kubelet[2584]: E0209 19:04:16.353232 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.353417 kubelet[2584]: W0209 19:04:16.353405 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.353499 kubelet[2584]: E0209 19:04:16.353490 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.353730 kubelet[2584]: E0209 19:04:16.353721 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.353797 kubelet[2584]: W0209 19:04:16.353788 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.353864 kubelet[2584]: E0209 19:04:16.353857 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.354206 kubelet[2584]: E0209 19:04:16.354191 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.354303 kubelet[2584]: W0209 19:04:16.354293 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.354369 kubelet[2584]: E0209 19:04:16.354362 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.355514 kubelet[2584]: E0209 19:04:16.355496 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.355627 kubelet[2584]: W0209 19:04:16.355616 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.355689 kubelet[2584]: E0209 19:04:16.355682 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.355954 kubelet[2584]: E0209 19:04:16.355945 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.356080 kubelet[2584]: W0209 19:04:16.356016 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.356163 kubelet[2584]: E0209 19:04:16.356154 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.356391 kubelet[2584]: E0209 19:04:16.356382 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.356463 kubelet[2584]: W0209 19:04:16.356454 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.356528 kubelet[2584]: E0209 19:04:16.356521 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.391933 kubelet[2584]: E0209 19:04:16.390124 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.433434 env[1412]: time="2024-02-09T19:04:16.433343336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8499488978-2zppj,Uid:b5c6ab68-15e3-4128-9813-3062d8e337f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\"" Feb 9 19:04:16.435921 env[1412]: time="2024-02-09T19:04:16.435874263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:04:16.449123 kubelet[2584]: E0209 19:04:16.449084 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.449418 kubelet[2584]: W0209 19:04:16.449399 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.449514 kubelet[2584]: E0209 19:04:16.449503 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.449961 kubelet[2584]: E0209 19:04:16.449938 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.450137 kubelet[2584]: W0209 19:04:16.450124 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.450218 kubelet[2584]: E0209 19:04:16.450210 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.557345 kubelet[2584]: E0209 19:04:16.557335 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:16.658424 kubelet[2584]: E0209 19:04:16.658284 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.658424 kubelet[2584]: W0209 19:04:16.658315 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.658424 kubelet[2584]: E0209 19:04:16.658350 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.690433 env[1412]: time="2024-02-09T19:04:16.690351256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tng4c,Uid:26e7f171-50ff-46d8-a0ff-56b1574dfed7,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:16.747570 kubelet[2584]: E0209 19:04:16.747536 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:16.747570 kubelet[2584]: W0209 19:04:16.747562 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:16.747817 kubelet[2584]: E0209 19:04:16.747619 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:16.775035 env[1412]: time="2024-02-09T19:04:16.774926751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:16.775304 env[1412]: time="2024-02-09T19:04:16.775100853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:16.775304 env[1412]: time="2024-02-09T19:04:16.775139153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:16.775427 env[1412]: time="2024-02-09T19:04:16.775316255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79 pid=3111 runtime=io.containerd.runc.v2 Feb 9 19:04:16.892047 env[1412]: time="2024-02-09T19:04:16.891973690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tng4c,Uid:26e7f171-50ff-46d8-a0ff-56b1574dfed7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:17.214000 audit[3180]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:17.214000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff2ffb3690 a2=0 a3=7fff2ffb367c items=0 ppid=2744 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:17.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:17.215000 audit[3180]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:17.215000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff2ffb3690 a2=0 a3=7fff2ffb367c items=0 ppid=2744 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:17.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:18.374872 kubelet[2584]: E0209 19:04:18.374827 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:18.829505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069927586.mount: Deactivated successfully. Feb 9 19:04:20.372439 kubelet[2584]: E0209 19:04:20.372405 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:22.371349 kubelet[2584]: E0209 19:04:22.371301 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:24.371230 kubelet[2584]: E0209 19:04:24.371178 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:26.370601 kubelet[2584]: E0209 19:04:26.370560 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:28.371106 kubelet[2584]: E0209 19:04:28.371051 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:30.370618 kubelet[2584]: E0209 19:04:30.370565 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:32.370839 kubelet[2584]: E0209 19:04:32.370798 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:33.170966 env[1412]: time="2024-02-09T19:04:33.170896455Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:33.179771 env[1412]: time="2024-02-09T19:04:33.179713222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:33.184508 env[1412]: time="2024-02-09T19:04:33.184464958Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:33.188698 env[1412]: time="2024-02-09T19:04:33.188658489Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:33.189535 env[1412]: time="2024-02-09T19:04:33.189500196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 9 19:04:33.193050 env[1412]: time="2024-02-09T19:04:33.190928906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:04:33.209506 env[1412]: time="2024-02-09T19:04:33.209468047Z" level=info msg="CreateContainer within sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:04:33.238940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647145491.mount: Deactivated successfully. 
Feb 9 19:04:33.252615 env[1412]: time="2024-02-09T19:04:33.252570572Z" level=info msg="CreateContainer within sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\""
Feb 9 19:04:33.254491 env[1412]: time="2024-02-09T19:04:33.253216777Z" level=info msg="StartContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\""
Feb 9 19:04:33.334134 env[1412]: time="2024-02-09T19:04:33.334059188Z" level=info msg="StartContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" returns successfully"
Feb 9 19:04:33.500069 env[1412]: time="2024-02-09T19:04:33.499920042Z" level=info msg="StopContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" with timeout 300 (s)"
Feb 9 19:04:33.501015 env[1412]: time="2024-02-09T19:04:33.500479246Z" level=info msg="Stop container \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" with signal terminated"
Feb 9 19:04:34.199975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0-rootfs.mount: Deactivated successfully.
Feb 9 19:04:34.442202 kubelet[2584]: E0209 19:04:34.370520 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265
Feb 9 19:04:34.656589 env[1412]: time="2024-02-09T19:04:34.656528397Z" level=info msg="shim disconnected" id=26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0
Feb 9 19:04:34.656589 env[1412]: time="2024-02-09T19:04:34.656590997Z" level=warning msg="cleaning up after shim disconnected" id=26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0 namespace=k8s.io
Feb 9 19:04:34.657270 env[1412]: time="2024-02-09T19:04:34.656603697Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:34.668462 env[1412]: time="2024-02-09T19:04:34.668054683Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3249 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:34.680831 env[1412]: time="2024-02-09T19:04:34.680794977Z" level=info msg="StopContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" returns successfully"
Feb 9 19:04:34.681503 env[1412]: time="2024-02-09T19:04:34.681472482Z" level=info msg="StopPodSandbox for \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\""
Feb 9 19:04:34.684869 env[1412]: time="2024-02-09T19:04:34.681548983Z" level=info msg="Container to stop \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:04:34.684635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea-shm.mount: Deactivated successfully.
Feb 9 19:04:34.717256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea-rootfs.mount: Deactivated successfully.
Feb 9 19:04:34.737521 env[1412]: time="2024-02-09T19:04:34.737405997Z" level=info msg="shim disconnected" id=7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea
Feb 9 19:04:34.737705 env[1412]: time="2024-02-09T19:04:34.737519798Z" level=warning msg="cleaning up after shim disconnected" id=7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea namespace=k8s.io
Feb 9 19:04:34.737705 env[1412]: time="2024-02-09T19:04:34.737532698Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:34.746359 env[1412]: time="2024-02-09T19:04:34.746327264Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3281 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:34.746666 env[1412]: time="2024-02-09T19:04:34.746636866Z" level=info msg="TearDown network for sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" successfully"
Feb 9 19:04:34.746742 env[1412]: time="2024-02-09T19:04:34.746667066Z" level=info msg="StopPodSandbox for \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" returns successfully"
Feb 9 19:04:34.786348 kubelet[2584]: E0209 19:04:34.786315 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.786348 kubelet[2584]: W0209 19:04:34.786337 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.786591 kubelet[2584]: E0209 19:04:34.786366 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.786591 kubelet[2584]: I0209 19:04:34.786420 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5c6ab68-15e3-4128-9813-3062d8e337f8-typha-certs\") pod \"b5c6ab68-15e3-4128-9813-3062d8e337f8\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") "
Feb 9 19:04:34.786715 kubelet[2584]: E0209 19:04:34.786674 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.786715 kubelet[2584]: W0209 19:04:34.786689 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.786715 kubelet[2584]: E0209 19:04:34.786711 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.786895 kubelet[2584]: I0209 19:04:34.786751 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c6ab68-15e3-4128-9813-3062d8e337f8-tigera-ca-bundle\") pod \"b5c6ab68-15e3-4128-9813-3062d8e337f8\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") "
Feb 9 19:04:34.787048 kubelet[2584]: E0209 19:04:34.786968 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.787048 kubelet[2584]: W0209 19:04:34.786983 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.787211 kubelet[2584]: E0209 19:04:34.787060 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.787211 kubelet[2584]: I0209 19:04:34.787106 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2chm\" (UniqueName: \"kubernetes.io/projected/b5c6ab68-15e3-4128-9813-3062d8e337f8-kube-api-access-m2chm\") pod \"b5c6ab68-15e3-4128-9813-3062d8e337f8\" (UID: \"b5c6ab68-15e3-4128-9813-3062d8e337f8\") "
Feb 9 19:04:34.787646 kubelet[2584]: E0209 19:04:34.787430 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.787646 kubelet[2584]: W0209 19:04:34.787448 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.787646 kubelet[2584]: E0209 19:04:34.787468 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.788834 kubelet[2584]: E0209 19:04:34.788813 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.788994 kubelet[2584]: W0209 19:04:34.788975 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.789135 kubelet[2584]: E0209 19:04:34.789119 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.794962 systemd[1]: var-lib-kubelet-pods-b5c6ab68\x2d15e3\x2d4128\x2d9813\x2d3062d8e337f8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Feb 9 19:04:34.800923 systemd[1]: var-lib-kubelet-pods-b5c6ab68\x2d15e3\x2d4128\x2d9813\x2d3062d8e337f8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Feb 9 19:04:34.802600 kubelet[2584]: I0209 19:04:34.802561 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c6ab68-15e3-4128-9813-3062d8e337f8-kube-api-access-m2chm" (OuterVolumeSpecName: "kube-api-access-m2chm") pod "b5c6ab68-15e3-4128-9813-3062d8e337f8" (UID: "b5c6ab68-15e3-4128-9813-3062d8e337f8"). InnerVolumeSpecName "kube-api-access-m2chm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:04:34.802697 kubelet[2584]: I0209 19:04:34.802667 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c6ab68-15e3-4128-9813-3062d8e337f8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "b5c6ab68-15e3-4128-9813-3062d8e337f8" (UID: "b5c6ab68-15e3-4128-9813-3062d8e337f8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:04:34.802859 kubelet[2584]: E0209 19:04:34.802842 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:34.802859 kubelet[2584]: W0209 19:04:34.802857 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:34.802979 kubelet[2584]: E0209 19:04:34.802878 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:34.803060 kubelet[2584]: W0209 19:04:34.803004 2584 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b5c6ab68-15e3-4128-9813-3062d8e337f8/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled
Feb 9 19:04:34.803273 kubelet[2584]: I0209 19:04:34.803250 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c6ab68-15e3-4128-9813-3062d8e337f8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b5c6ab68-15e3-4128-9813-3062d8e337f8" (UID: "b5c6ab68-15e3-4128-9813-3062d8e337f8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:04:34.887482 kubelet[2584]: I0209 19:04:34.887430 2584 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c6ab68-15e3-4128-9813-3062d8e337f8-tigera-ca-bundle\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\""
Feb 9 19:04:34.887482 kubelet[2584]: I0209 19:04:34.887480 2584 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-m2chm\" (UniqueName: \"kubernetes.io/projected/b5c6ab68-15e3-4128-9813-3062d8e337f8-kube-api-access-m2chm\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\""
Feb 9 19:04:34.887762 kubelet[2584]: I0209 19:04:34.887503 2584 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5c6ab68-15e3-4128-9813-3062d8e337f8-typha-certs\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\""
Feb 9 19:04:35.199589 systemd[1]: var-lib-kubelet-pods-b5c6ab68\x2d15e3\x2d4128\x2d9813\x2d3062d8e337f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm2chm.mount: Deactivated successfully.
Feb 9 19:04:35.503626 kubelet[2584]: I0209 19:04:35.503590 2584 scope.go:115] "RemoveContainer" containerID="26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0"
Feb 9 19:04:35.505445 env[1412]: time="2024-02-09T19:04:35.505402136Z" level=info msg="RemoveContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\""
Feb 9 19:04:35.522125 env[1412]: time="2024-02-09T19:04:35.522079758Z" level=info msg="RemoveContainer for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" returns successfully"
Feb 9 19:04:35.522505 kubelet[2584]: I0209 19:04:35.522466 2584 scope.go:115] "RemoveContainer" containerID="26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0"
Feb 9 19:04:35.522800 env[1412]: time="2024-02-09T19:04:35.522709862Z" level=error msg="ContainerStatus for \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\": not found"
Feb 9 19:04:35.522982 kubelet[2584]: E0209 19:04:35.522949 2584 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\": not found" containerID="26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0"
Feb 9 19:04:35.523072 kubelet[2584]: I0209 19:04:35.522990 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0} err="failed to get container status \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"26f46a2fa6882974af5a0feeda260811fe662af1fc485832b5a2f63876a76fa0\": not found"
Feb 9 19:04:35.547043 kubelet[2584]: I0209 19:04:35.545910 2584 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:35.547043 kubelet[2584]: E0209 19:04:35.545992 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5c6ab68-15e3-4128-9813-3062d8e337f8" containerName="calico-typha"
Feb 9 19:04:35.547043 kubelet[2584]: I0209 19:04:35.546047 2584 memory_manager.go:346] "RemoveStaleState removing state" podUID="b5c6ab68-15e3-4128-9813-3062d8e337f8" containerName="calico-typha"
Feb 9 19:04:35.576731 kubelet[2584]: E0209 19:04:35.576701 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.576974 kubelet[2584]: W0209 19:04:35.576957 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.577078 kubelet[2584]: E0209 19:04:35.577068 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.577341 kubelet[2584]: E0209 19:04:35.577330 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.577416 kubelet[2584]: W0209 19:04:35.577406 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.577489 kubelet[2584]: E0209 19:04:35.577479 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.577747 kubelet[2584]: E0209 19:04:35.577731 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.577861 kubelet[2584]: W0209 19:04:35.577848 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.577940 kubelet[2584]: E0209 19:04:35.577931 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.578275 kubelet[2584]: E0209 19:04:35.578261 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.578380 kubelet[2584]: W0209 19:04:35.578366 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.578465 kubelet[2584]: E0209 19:04:35.578454 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.578740 kubelet[2584]: E0209 19:04:35.578727 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.578840 kubelet[2584]: W0209 19:04:35.578827 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.578934 kubelet[2584]: E0209 19:04:35.578924 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.579478 kubelet[2584]: E0209 19:04:35.579462 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.579607 kubelet[2584]: W0209 19:04:35.579584 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.579688 kubelet[2584]: E0209 19:04:35.579609 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.579896 kubelet[2584]: E0209 19:04:35.579883 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.580003 kubelet[2584]: W0209 19:04:35.579988 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.580131 kubelet[2584]: E0209 19:04:35.580119 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.580449 kubelet[2584]: E0209 19:04:35.580434 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.580648 kubelet[2584]: W0209 19:04:35.580634 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.580757 kubelet[2584]: E0209 19:04:35.580747 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.581071 kubelet[2584]: E0209 19:04:35.581057 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.581196 kubelet[2584]: W0209 19:04:35.581182 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.581299 kubelet[2584]: E0209 19:04:35.581289 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.592370 kubelet[2584]: E0209 19:04:35.592349 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.592370 kubelet[2584]: W0209 19:04:35.592368 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.592550 kubelet[2584]: E0209 19:04:35.592391 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.592550 kubelet[2584]: I0209 19:04:35.592436 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d20b0220-7e40-4243-b7e3-6f014e0f35df-typha-certs\") pod \"calico-typha-54f94bb648-r7k8f\" (UID: \"d20b0220-7e40-4243-b7e3-6f014e0f35df\") " pod="calico-system/calico-typha-54f94bb648-r7k8f"
Feb 9 19:04:35.592785 kubelet[2584]: E0209 19:04:35.592765 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.592785 kubelet[2584]: W0209 19:04:35.592784 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.592904 kubelet[2584]: E0209 19:04:35.592806 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.592904 kubelet[2584]: I0209 19:04:35.592842 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d20b0220-7e40-4243-b7e3-6f014e0f35df-tigera-ca-bundle\") pod \"calico-typha-54f94bb648-r7k8f\" (UID: \"d20b0220-7e40-4243-b7e3-6f014e0f35df\") " pod="calico-system/calico-typha-54f94bb648-r7k8f"
Feb 9 19:04:35.593099 kubelet[2584]: E0209 19:04:35.593081 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.593175 kubelet[2584]: W0209 19:04:35.593099 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.593175 kubelet[2584]: E0209 19:04:35.593116 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.593333 kubelet[2584]: E0209 19:04:35.593317 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.593390 kubelet[2584]: W0209 19:04:35.593334 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.593390 kubelet[2584]: E0209 19:04:35.593359 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.593650 kubelet[2584]: E0209 19:04:35.593630 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.593721 kubelet[2584]: W0209 19:04:35.593654 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.593721 kubelet[2584]: E0209 19:04:35.593673 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:04:35.593721 kubelet[2584]: I0209 19:04:35.593702 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zb7\" (UniqueName: \"kubernetes.io/projected/d20b0220-7e40-4243-b7e3-6f014e0f35df-kube-api-access-55zb7\") pod \"calico-typha-54f94bb648-r7k8f\" (UID: \"d20b0220-7e40-4243-b7e3-6f014e0f35df\") " pod="calico-system/calico-typha-54f94bb648-r7k8f"
Feb 9 19:04:35.593920 kubelet[2584]: E0209 19:04:35.593905 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:04:35.593982 kubelet[2584]: W0209 19:04:35.593921 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:04:35.593982 kubelet[2584]: E0209 19:04:35.593945 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 9 19:04:35.594170 kubelet[2584]: E0209 19:04:35.594156 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.594228 kubelet[2584]: W0209 19:04:35.594172 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.594228 kubelet[2584]: E0209 19:04:35.594196 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.594443 kubelet[2584]: E0209 19:04:35.594426 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.594443 kubelet[2584]: W0209 19:04:35.594441 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.594581 kubelet[2584]: E0209 19:04:35.594468 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.594675 kubelet[2584]: E0209 19:04:35.594661 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.594731 kubelet[2584]: W0209 19:04:35.594676 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.594731 kubelet[2584]: E0209 19:04:35.594700 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.597000 audit[3360]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.609042 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:04:35.609145 kernel: audit: type=1325 audit(1707505475.597:288): table=filter:113 family=2 entries=14 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.597000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe1c547070 a2=0 a3=7ffe1c54705c items=0 ppid=2744 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.636815 kernel: audit: type=1300 audit(1707505475.597:288): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe1c547070 a2=0 a3=7ffe1c54705c items=0 ppid=2744 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.597000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.658089 kernel: audit: type=1327 audit(1707505475.597:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.658247 kernel: audit: type=1325 audit(1707505475.597:289): table=nat:114 family=2 entries=20 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.597000 audit[3360]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.597000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe1c547070 a2=0 a3=7ffe1c54705c items=0 ppid=2744 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.682085 kernel: audit: type=1300 audit(1707505475.597:289): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe1c547070 a2=0 a3=7ffe1c54705c items=0 ppid=2744 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.682222 kernel: audit: type=1327 audit(1707505475.597:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.657000 audit[3386]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.702732 kernel: audit: type=1325 
audit(1707505475.657:290): table=filter:115 family=2 entries=14 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.657000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd93f6d9a0 a2=0 a3=7ffd93f6d98c items=0 ppid=2744 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.703227 kubelet[2584]: E0209 19:04:35.703210 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.703340 kubelet[2584]: W0209 19:04:35.703327 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.703409 kubelet[2584]: E0209 19:04:35.703399 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.706134 kubelet[2584]: E0209 19:04:35.706115 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.706257 kubelet[2584]: W0209 19:04:35.706246 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.706330 kubelet[2584]: E0209 19:04:35.706321 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.706611 kubelet[2584]: E0209 19:04:35.706602 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.706689 kubelet[2584]: W0209 19:04:35.706678 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.706751 kubelet[2584]: E0209 19:04:35.706743 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.706985 kubelet[2584]: E0209 19:04:35.706975 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.707131 kubelet[2584]: W0209 19:04:35.707120 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.707201 kubelet[2584]: E0209 19:04:35.707194 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.707477 kubelet[2584]: E0209 19:04:35.707468 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.707549 kubelet[2584]: W0209 19:04:35.707540 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.707607 kubelet[2584]: E0209 19:04:35.707600 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.707829 kubelet[2584]: E0209 19:04:35.707820 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.707896 kubelet[2584]: W0209 19:04:35.707888 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.707958 kubelet[2584]: E0209 19:04:35.707952 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.708183 kubelet[2584]: E0209 19:04:35.708175 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.708253 kubelet[2584]: W0209 19:04:35.708244 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.708311 kubelet[2584]: E0209 19:04:35.708304 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.708528 kubelet[2584]: E0209 19:04:35.708519 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.708596 kubelet[2584]: W0209 19:04:35.708587 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.708673 kubelet[2584]: E0209 19:04:35.708666 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.708883 kubelet[2584]: E0209 19:04:35.708874 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.708948 kubelet[2584]: W0209 19:04:35.708939 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.709006 kubelet[2584]: E0209 19:04:35.709000 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.709277 kubelet[2584]: E0209 19:04:35.709268 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.709351 kubelet[2584]: W0209 19:04:35.709342 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.709412 kubelet[2584]: E0209 19:04:35.709405 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.709624 kubelet[2584]: E0209 19:04:35.709616 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.709692 kubelet[2584]: W0209 19:04:35.709683 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.709747 kubelet[2584]: E0209 19:04:35.709741 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.709980 kubelet[2584]: E0209 19:04:35.709972 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.710064 kubelet[2584]: W0209 19:04:35.710055 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.710121 kubelet[2584]: E0209 19:04:35.710115 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.710615 kubelet[2584]: E0209 19:04:35.710603 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.710694 kubelet[2584]: W0209 19:04:35.710684 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.710755 kubelet[2584]: E0209 19:04:35.710748 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.717845 kubelet[2584]: E0209 19:04:35.717832 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.717950 kubelet[2584]: W0209 19:04:35.717939 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.718040 kubelet[2584]: E0209 19:04:35.718014 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.724398 kernel: audit: type=1300 audit(1707505475.657:290): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd93f6d9a0 a2=0 a3=7ffd93f6d98c items=0 ppid=2744 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.724717 kubelet[2584]: E0209 19:04:35.724701 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.724836 kubelet[2584]: W0209 19:04:35.724820 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.724921 kubelet[2584]: E0209 19:04:35.724911 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.726790 kubelet[2584]: E0209 19:04:35.726773 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.726911 kubelet[2584]: W0209 19:04:35.726899 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.727073 kubelet[2584]: E0209 19:04:35.727061 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.727293 kubelet[2584]: E0209 19:04:35.727283 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.727365 kubelet[2584]: W0209 19:04:35.727356 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.727430 kubelet[2584]: E0209 19:04:35.727424 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:35.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.659000 audit[3386]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.747366 kubelet[2584]: E0209 19:04:35.747351 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:35.747507 kubelet[2584]: W0209 19:04:35.747492 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:35.747606 kubelet[2584]: E0209 19:04:35.747596 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:35.749417 kernel: audit: type=1327 audit(1707505475.657:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.749496 kernel: audit: type=1325 audit(1707505475.659:291): table=nat:116 family=2 entries=20 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:35.659000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd93f6d9a0 a2=0 a3=7ffd93f6d98c items=0 ppid=2744 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:35.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:35.851891 env[1412]: time="2024-02-09T19:04:35.851746763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f94bb648-r7k8f,Uid:d20b0220-7e40-4243-b7e3-6f014e0f35df,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:35.906398 env[1412]: time="2024-02-09T19:04:35.906322462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:35.906398 env[1412]: time="2024-02-09T19:04:35.906361362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:35.906626 env[1412]: time="2024-02-09T19:04:35.906382462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:35.907075 env[1412]: time="2024-02-09T19:04:35.906956566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b45fb8addba96c8c77357625d6dd3e30d7fb290831f14979fb748e5043b3d98 pid=3415 runtime=io.containerd.runc.v2 Feb 9 19:04:35.968210 env[1412]: time="2024-02-09T19:04:35.968158413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f94bb648-r7k8f,Uid:d20b0220-7e40-4243-b7e3-6f014e0f35df,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b45fb8addba96c8c77357625d6dd3e30d7fb290831f14979fb748e5043b3d98\"" Feb 9 19:04:35.981779 env[1412]: time="2024-02-09T19:04:35.981639611Z" level=info msg="CreateContainer within sandbox \"9b45fb8addba96c8c77357625d6dd3e30d7fb290831f14979fb748e5043b3d98\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:04:36.043595 env[1412]: time="2024-02-09T19:04:36.043494558Z" level=info msg="CreateContainer within sandbox \"9b45fb8addba96c8c77357625d6dd3e30d7fb290831f14979fb748e5043b3d98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bd73929b385120c01baee5361fdef72927b127d60ac27d1bc4a292938bb73f14\"" Feb 9 19:04:36.045836 env[1412]: time="2024-02-09T19:04:36.045799374Z" level=info msg="StartContainer for \"bd73929b385120c01baee5361fdef72927b127d60ac27d1bc4a292938bb73f14\"" Feb 9 19:04:36.132671 env[1412]: time="2024-02-09T19:04:36.131970392Z" level=info msg="StartContainer for \"bd73929b385120c01baee5361fdef72927b127d60ac27d1bc4a292938bb73f14\" returns successfully" Feb 9 19:04:36.371525 kubelet[2584]: E0209 19:04:36.370687 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" 
podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:36.377401 kubelet[2584]: I0209 19:04:36.377156 2584 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b5c6ab68-15e3-4128-9813-3062d8e337f8 path="/var/lib/kubelet/pods/b5c6ab68-15e3-4128-9813-3062d8e337f8/volumes" Feb 9 19:04:36.522886 kubelet[2584]: I0209 19:04:36.522851 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54f94bb648-r7k8f" podStartSLOduration=20.522804696 pod.CreationTimestamp="2024-02-09 19:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:36.522594195 +0000 UTC m=+38.406592907" watchObservedRunningTime="2024-02-09 19:04:36.522804696 +0000 UTC m=+38.406803308" Feb 9 19:04:36.590459 kubelet[2584]: E0209 19:04:36.590413 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.590459 kubelet[2584]: W0209 19:04:36.590441 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.590459 kubelet[2584]: E0209 19:04:36.590470 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.590913 kubelet[2584]: E0209 19:04:36.590735 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.590913 kubelet[2584]: W0209 19:04:36.590747 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.590913 kubelet[2584]: E0209 19:04:36.590769 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.591081 kubelet[2584]: E0209 19:04:36.590954 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.591081 kubelet[2584]: W0209 19:04:36.590964 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.591081 kubelet[2584]: E0209 19:04:36.590980 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.591259 kubelet[2584]: E0209 19:04:36.591235 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.591259 kubelet[2584]: W0209 19:04:36.591257 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.591386 kubelet[2584]: E0209 19:04:36.591274 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.591481 kubelet[2584]: E0209 19:04:36.591463 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.591562 kubelet[2584]: W0209 19:04:36.591482 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.591562 kubelet[2584]: E0209 19:04:36.591502 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.591707 kubelet[2584]: E0209 19:04:36.591692 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.591767 kubelet[2584]: W0209 19:04:36.591708 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.591767 kubelet[2584]: E0209 19:04:36.591727 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.592009 kubelet[2584]: E0209 19:04:36.591990 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.592009 kubelet[2584]: W0209 19:04:36.592004 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.592208 kubelet[2584]: E0209 19:04:36.592038 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.592281 kubelet[2584]: E0209 19:04:36.592266 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.592334 kubelet[2584]: W0209 19:04:36.592283 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.592334 kubelet[2584]: E0209 19:04:36.592308 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.592525 kubelet[2584]: E0209 19:04:36.592509 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.592525 kubelet[2584]: W0209 19:04:36.592522 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.592651 kubelet[2584]: E0209 19:04:36.592547 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.592809 kubelet[2584]: E0209 19:04:36.592786 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.592809 kubelet[2584]: W0209 19:04:36.592799 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.592948 kubelet[2584]: E0209 19:04:36.592815 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.593051 kubelet[2584]: E0209 19:04:36.593016 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.593121 kubelet[2584]: W0209 19:04:36.593053 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.593121 kubelet[2584]: E0209 19:04:36.593072 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.593271 kubelet[2584]: E0209 19:04:36.593253 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.593271 kubelet[2584]: W0209 19:04:36.593267 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.593370 kubelet[2584]: E0209 19:04:36.593283 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.614660 kubelet[2584]: E0209 19:04:36.614639 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.614660 kubelet[2584]: W0209 19:04:36.614655 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.614841 kubelet[2584]: E0209 19:04:36.614673 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.614964 kubelet[2584]: E0209 19:04:36.614948 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.614964 kubelet[2584]: W0209 19:04:36.614961 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.615093 kubelet[2584]: E0209 19:04:36.614982 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.615261 kubelet[2584]: E0209 19:04:36.615246 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.615261 kubelet[2584]: W0209 19:04:36.615258 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.615379 kubelet[2584]: E0209 19:04:36.615281 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.615523 kubelet[2584]: E0209 19:04:36.615507 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.615523 kubelet[2584]: W0209 19:04:36.615520 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.615646 kubelet[2584]: E0209 19:04:36.615541 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.615752 kubelet[2584]: E0209 19:04:36.615738 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.615752 kubelet[2584]: W0209 19:04:36.615750 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.615881 kubelet[2584]: E0209 19:04:36.615771 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.615983 kubelet[2584]: E0209 19:04:36.615970 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.615983 kubelet[2584]: W0209 19:04:36.615982 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.616103 kubelet[2584]: E0209 19:04:36.616083 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.616263 kubelet[2584]: E0209 19:04:36.616248 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.616263 kubelet[2584]: W0209 19:04:36.616259 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.616417 kubelet[2584]: E0209 19:04:36.616405 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.616571 kubelet[2584]: E0209 19:04:36.616439 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.616647 kubelet[2584]: W0209 19:04:36.616570 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.616737 kubelet[2584]: E0209 19:04:36.616725 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.616817 kubelet[2584]: E0209 19:04:36.616753 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.616887 kubelet[2584]: W0209 19:04:36.616815 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.616887 kubelet[2584]: E0209 19:04:36.616832 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.617224 kubelet[2584]: E0209 19:04:36.617207 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.617224 kubelet[2584]: W0209 19:04:36.617220 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.617373 kubelet[2584]: E0209 19:04:36.617239 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.617425 kubelet[2584]: E0209 19:04:36.617413 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.617425 kubelet[2584]: W0209 19:04:36.617423 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.617508 kubelet[2584]: E0209 19:04:36.617439 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.617659 kubelet[2584]: E0209 19:04:36.617643 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.617659 kubelet[2584]: W0209 19:04:36.617657 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.617793 kubelet[2584]: E0209 19:04:36.617678 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.618344 kubelet[2584]: E0209 19:04:36.618327 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.618344 kubelet[2584]: W0209 19:04:36.618339 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.618490 kubelet[2584]: E0209 19:04:36.618360 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.618612 kubelet[2584]: E0209 19:04:36.618595 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.618676 kubelet[2584]: W0209 19:04:36.618613 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.618766 kubelet[2584]: E0209 19:04:36.618755 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.618850 kubelet[2584]: E0209 19:04:36.618793 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.618921 kubelet[2584]: W0209 19:04:36.618849 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.619012 kubelet[2584]: E0209 19:04:36.619000 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.619156 kubelet[2584]: E0209 19:04:36.619076 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.619227 kubelet[2584]: W0209 19:04:36.619155 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.619227 kubelet[2584]: E0209 19:04:36.619172 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:36.619405 kubelet[2584]: E0209 19:04:36.619389 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.619405 kubelet[2584]: W0209 19:04:36.619402 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.619516 kubelet[2584]: E0209 19:04:36.619418 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:36.619791 kubelet[2584]: E0209 19:04:36.619776 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:36.619791 kubelet[2584]: W0209 19:04:36.619789 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:36.619890 kubelet[2584]: E0209 19:04:36.619805 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.113511 update_engine[1376]: I0209 19:04:37.113456 1376 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 19:04:37.113511 update_engine[1376]: I0209 19:04:37.113504 1376 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 19:04:37.114428 update_engine[1376]: I0209 19:04:37.113637 1376 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 19:04:37.114428 update_engine[1376]: I0209 19:04:37.114178 1376 omaha_request_params.cc:62] Current group set to lts Feb 9 19:04:37.114428 update_engine[1376]: I0209 19:04:37.114382 1376 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 19:04:37.114428 update_engine[1376]: I0209 19:04:37.114392 1376 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 19:04:37.114692 update_engine[1376]: I0209 19:04:37.114666 1376 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:04:37.114754 update_engine[1376]: I0209 19:04:37.114723 1376 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 19:04:37.114824 update_engine[1376]: I0209 19:04:37.114809 1376 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:04:37.114824 update_engine[1376]: I0209 19:04:37.114821 1376 omaha_request_action.cc:271] Request: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.114824 update_engine[1376]: Feb 9 19:04:37.115559 update_engine[1376]: I0209 19:04:37.114828 1376 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:04:37.115916 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 19:04:37.116555 update_engine[1376]: I0209 19:04:37.116530 1376 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:04:37.116758 update_engine[1376]: I0209 19:04:37.116739 1376 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:04:37.132176 update_engine[1376]: E0209 19:04:37.132140 1376 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:04:37.132314 update_engine[1376]: I0209 19:04:37.132267 1376 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 19:04:37.601178 kubelet[2584]: E0209 19:04:37.600984 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.601178 kubelet[2584]: W0209 19:04:37.601014 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.601178 kubelet[2584]: E0209 19:04:37.601057 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.601920 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.602750 kubelet[2584]: W0209 19:04:37.601940 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.601994 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.602263 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.602750 kubelet[2584]: W0209 19:04:37.602274 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.602291 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.602504 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.602750 kubelet[2584]: W0209 19:04:37.602513 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.602529 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.602750 kubelet[2584]: E0209 19:04:37.602668 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.603292 kubelet[2584]: W0209 19:04:37.602676 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.603292 kubelet[2584]: E0209 19:04:37.602688 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.604004 kubelet[2584]: E0209 19:04:37.603533 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.604004 kubelet[2584]: W0209 19:04:37.603548 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.604004 kubelet[2584]: E0209 19:04:37.603566 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.604004 kubelet[2584]: E0209 19:04:37.603839 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.604004 kubelet[2584]: W0209 19:04:37.603853 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.604004 kubelet[2584]: E0209 19:04:37.603874 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.604711 kubelet[2584]: E0209 19:04:37.604400 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.604711 kubelet[2584]: W0209 19:04:37.604414 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.604711 kubelet[2584]: E0209 19:04:37.604433 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.604711 kubelet[2584]: E0209 19:04:37.604608 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.604711 kubelet[2584]: W0209 19:04:37.604616 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.604711 kubelet[2584]: E0209 19:04:37.604630 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.605273 kubelet[2584]: E0209 19:04:37.605168 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.605273 kubelet[2584]: W0209 19:04:37.605181 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.605273 kubelet[2584]: E0209 19:04:37.605197 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.608341 kubelet[2584]: E0209 19:04:37.605391 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.608341 kubelet[2584]: W0209 19:04:37.605400 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.608341 kubelet[2584]: E0209 19:04:37.605414 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.608635 kubelet[2584]: E0209 19:04:37.608529 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.608635 kubelet[2584]: W0209 19:04:37.608542 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.608635 kubelet[2584]: E0209 19:04:37.608559 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.617000 audit[3561]: NETFILTER_CFG table=filter:117 family=2 entries=13 op=nft_register_rule pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:37.617000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffd34677d80 a2=0 a3=7ffd34677d6c items=0 ppid=2744 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:37.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:37.619000 audit[3561]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:04:37.619000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffd34677d80 a2=0 a3=7ffd34677d6c items=0 ppid=2744 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:37.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:04:37.625479 kubelet[2584]: E0209 19:04:37.625264 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.625479 kubelet[2584]: W0209 19:04:37.625292 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.625479 kubelet[2584]: E0209 19:04:37.625323 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.626671 kubelet[2584]: E0209 19:04:37.626640 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.626671 kubelet[2584]: W0209 19:04:37.626665 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.626863 kubelet[2584]: E0209 19:04:37.626691 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.627088 kubelet[2584]: E0209 19:04:37.627036 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.627088 kubelet[2584]: W0209 19:04:37.627051 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.627088 kubelet[2584]: E0209 19:04:37.627071 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.627350 kubelet[2584]: E0209 19:04:37.627334 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.627350 kubelet[2584]: W0209 19:04:37.627347 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.627490 kubelet[2584]: E0209 19:04:37.627369 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.627604 kubelet[2584]: E0209 19:04:37.627589 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.627604 kubelet[2584]: W0209 19:04:37.627602 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.627756 kubelet[2584]: E0209 19:04:37.627743 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.627842 kubelet[2584]: E0209 19:04:37.627786 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.627924 kubelet[2584]: W0209 19:04:37.627841 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.628047 kubelet[2584]: E0209 19:04:37.628004 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.628113 kubelet[2584]: E0209 19:04:37.628065 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.628113 kubelet[2584]: W0209 19:04:37.628075 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.628113 kubelet[2584]: E0209 19:04:37.628094 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.628280 kubelet[2584]: E0209 19:04:37.628269 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.628341 kubelet[2584]: W0209 19:04:37.628281 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.628341 kubelet[2584]: E0209 19:04:37.628300 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.628555 kubelet[2584]: E0209 19:04:37.628540 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.628555 kubelet[2584]: W0209 19:04:37.628552 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.628680 kubelet[2584]: E0209 19:04:37.628575 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.628855 kubelet[2584]: E0209 19:04:37.628840 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.628855 kubelet[2584]: W0209 19:04:37.628853 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.628987 kubelet[2584]: E0209 19:04:37.628873 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.629638 kubelet[2584]: E0209 19:04:37.629622 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.629638 kubelet[2584]: W0209 19:04:37.629635 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.629762 kubelet[2584]: E0209 19:04:37.629730 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.629918 kubelet[2584]: E0209 19:04:37.629905 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.629918 kubelet[2584]: W0209 19:04:37.629916 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.630056 kubelet[2584]: E0209 19:04:37.630012 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.630215 kubelet[2584]: E0209 19:04:37.630201 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.630285 kubelet[2584]: W0209 19:04:37.630215 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.630285 kubelet[2584]: E0209 19:04:37.630235 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.630489 kubelet[2584]: E0209 19:04:37.630474 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.630489 kubelet[2584]: W0209 19:04:37.630487 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.630596 kubelet[2584]: E0209 19:04:37.630506 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.630950 kubelet[2584]: E0209 19:04:37.630905 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.630950 kubelet[2584]: W0209 19:04:37.630918 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.630950 kubelet[2584]: E0209 19:04:37.630938 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631218 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.632839 kubelet[2584]: W0209 19:04:37.631227 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631238 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631418 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.632839 kubelet[2584]: W0209 19:04:37.631425 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631435 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631699 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:04:37.632839 kubelet[2584]: W0209 19:04:37.631706 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:04:37.632839 kubelet[2584]: E0209 19:04:37.631716 2584 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:04:38.360842 env[1412]: time="2024-02-09T19:04:38.360764676Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:38.368151 env[1412]: time="2024-02-09T19:04:38.368104827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:38.370977 kubelet[2584]: E0209 19:04:38.370873 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:38.377168 env[1412]: time="2024-02-09T19:04:38.377128990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:38.384348 env[1412]: time="2024-02-09T19:04:38.384307940Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:38.385298 env[1412]: time="2024-02-09T19:04:38.385259046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:04:38.389323 env[1412]: time="2024-02-09T19:04:38.389267874Z" level=info msg="CreateContainer within sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:04:38.439653 env[1412]: time="2024-02-09T19:04:38.439590323Z" level=info msg="CreateContainer within sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\"" Feb 9 19:04:38.442805 env[1412]: time="2024-02-09T19:04:38.440330328Z" level=info msg="StartContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\"" Feb 9 19:04:38.518768 env[1412]: time="2024-02-09T19:04:38.518696872Z" level=info msg="StartContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" returns successfully" Feb 9 19:04:38.524782 env[1412]: time="2024-02-09T19:04:38.524725014Z" level=info msg="StopContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" with timeout 5 (s)" Feb 9 19:04:38.525185 env[1412]: time="2024-02-09T19:04:38.525145817Z" level=info msg="Stop container \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" with signal terminated" Feb 9 19:04:38.582784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7-rootfs.mount: Deactivated 
successfully. Feb 9 19:04:38.769060 env[1412]: time="2024-02-09T19:04:38.768983608Z" level=info msg="shim disconnected" id=5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7 Feb 9 19:04:38.769368 env[1412]: time="2024-02-09T19:04:38.769067609Z" level=warning msg="cleaning up after shim disconnected" id=5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7 namespace=k8s.io Feb 9 19:04:38.769368 env[1412]: time="2024-02-09T19:04:38.769082009Z" level=info msg="cleaning up dead shim" Feb 9 19:04:38.778978 env[1412]: time="2024-02-09T19:04:38.778921577Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3643 runtime=io.containerd.runc.v2\n" Feb 9 19:04:38.785729 env[1412]: time="2024-02-09T19:04:38.785684624Z" level=info msg="StopContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" returns successfully" Feb 9 19:04:38.787241 env[1412]: time="2024-02-09T19:04:38.786689331Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:38.787414 env[1412]: time="2024-02-09T19:04:38.787349436Z" level=info msg="Container to stop \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:38.790225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79-shm.mount: Deactivated successfully. 
Feb 9 19:04:38.828796 env[1412]: time="2024-02-09T19:04:38.828731923Z" level=info msg="shim disconnected" id=1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79 Feb 9 19:04:38.828796 env[1412]: time="2024-02-09T19:04:38.828800023Z" level=warning msg="cleaning up after shim disconnected" id=1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79 namespace=k8s.io Feb 9 19:04:38.829166 env[1412]: time="2024-02-09T19:04:38.828812123Z" level=info msg="cleaning up dead shim" Feb 9 19:04:38.839814 env[1412]: time="2024-02-09T19:04:38.839755799Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3675 runtime=io.containerd.runc.v2\n" Feb 9 19:04:38.840187 env[1412]: time="2024-02-09T19:04:38.840150502Z" level=info msg="TearDown network for sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" successfully" Feb 9 19:04:38.840304 env[1412]: time="2024-02-09T19:04:38.840184702Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" returns successfully" Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935150 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nzd\" (UniqueName: \"kubernetes.io/projected/26e7f171-50ff-46d8-a0ff-56b1574dfed7-kube-api-access-m5nzd\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935231 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e7f171-50ff-46d8-a0ff-56b1574dfed7-tigera-ca-bundle\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935264 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-net-dir\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935296 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-lib-calico\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935331 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-log-dir\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.936950 kubelet[2584]: I0209 19:04:38.935359 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-xtables-lock\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935395 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26e7f171-50ff-46d8-a0ff-56b1574dfed7-node-certs\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935426 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-flexvol-driver-host\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935458 
2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-bin-dir\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935489 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-policysync\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935527 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-run-calico\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.937834 kubelet[2584]: I0209 19:04:38.935559 2584 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-lib-modules\") pod \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\" (UID: \"26e7f171-50ff-46d8-a0ff-56b1574dfed7\") " Feb 9 19:04:38.938144 kubelet[2584]: I0209 19:04:38.935646 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938144 kubelet[2584]: I0209 19:04:38.936136 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938144 kubelet[2584]: W0209 19:04:38.936321 2584 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/26e7f171-50ff-46d8-a0ff-56b1574dfed7/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 19:04:38.938144 kubelet[2584]: I0209 19:04:38.936544 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e7f171-50ff-46d8-a0ff-56b1574dfed7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:38.938144 kubelet[2584]: I0209 19:04:38.936573 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938361 kubelet[2584]: I0209 19:04:38.936595 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). 
InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938361 kubelet[2584]: I0209 19:04:38.936610 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938361 kubelet[2584]: I0209 19:04:38.936626 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938361 kubelet[2584]: I0209 19:04:38.936796 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938361 kubelet[2584]: I0209 19:04:38.936816 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-policysync" (OuterVolumeSpecName: "policysync") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.938571 kubelet[2584]: I0209 19:04:38.936838 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:38.941366 kubelet[2584]: I0209 19:04:38.941336 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e7f171-50ff-46d8-a0ff-56b1574dfed7-kube-api-access-m5nzd" (OuterVolumeSpecName: "kube-api-access-m5nzd") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "kube-api-access-m5nzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:38.943795 kubelet[2584]: I0209 19:04:38.943766 2584 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e7f171-50ff-46d8-a0ff-56b1574dfed7-node-certs" (OuterVolumeSpecName: "node-certs") pod "26e7f171-50ff-46d8-a0ff-56b1574dfed7" (UID: "26e7f171-50ff-46d8-a0ff-56b1574dfed7"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036720 2584 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e7f171-50ff-46d8-a0ff-56b1574dfed7-tigera-ca-bundle\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036764 2584 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-net-dir\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036786 2584 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-m5nzd\" (UniqueName: \"kubernetes.io/projected/26e7f171-50ff-46d8-a0ff-56b1574dfed7-kube-api-access-m5nzd\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036804 2584 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-lib-calico\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036822 2584 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26e7f171-50ff-46d8-a0ff-56b1574dfed7-node-certs\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036839 2584 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-log-dir\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036856 2584 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-xtables-lock\") on node 
\"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037085 kubelet[2584]: I0209 19:04:39.036873 2584 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-flexvol-driver-host\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037685 kubelet[2584]: I0209 19:04:39.036890 2584 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-cni-bin-dir\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037685 kubelet[2584]: I0209 19:04:39.036907 2584 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-var-run-calico\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037685 kubelet[2584]: I0209 19:04:39.036924 2584 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-policysync\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.037685 kubelet[2584]: I0209 19:04:39.036941 2584 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26e7f171-50ff-46d8-a0ff-56b1574dfed7-lib-modules\") on node \"ci-3510.3.2-a-00ed68a33d\" DevicePath \"\"" Feb 9 19:04:39.425146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79-rootfs.mount: Deactivated successfully. Feb 9 19:04:39.425364 systemd[1]: var-lib-kubelet-pods-26e7f171\x2d50ff\x2d46d8\x2da0ff\x2d56b1574dfed7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm5nzd.mount: Deactivated successfully. 
Feb 9 19:04:39.425492 systemd[1]: var-lib-kubelet-pods-26e7f171\x2d50ff\x2d46d8\x2da0ff\x2d56b1574dfed7-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 9 19:04:39.524840 kubelet[2584]: I0209 19:04:39.524806 2584 scope.go:115] "RemoveContainer" containerID="5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7" Feb 9 19:04:39.527754 env[1412]: time="2024-02-09T19:04:39.527688012Z" level=info msg="RemoveContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\"" Feb 9 19:04:39.545691 env[1412]: time="2024-02-09T19:04:39.545641835Z" level=info msg="RemoveContainer for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" returns successfully" Feb 9 19:04:39.545935 kubelet[2584]: I0209 19:04:39.545908 2584 scope.go:115] "RemoveContainer" containerID="5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7" Feb 9 19:04:39.546538 env[1412]: time="2024-02-09T19:04:39.546419940Z" level=error msg="ContainerStatus for \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\": not found" Feb 9 19:04:39.550109 kubelet[2584]: E0209 19:04:39.549552 2584 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\": not found" containerID="5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7" Feb 9 19:04:39.550109 kubelet[2584]: I0209 19:04:39.549606 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7} err="failed to get container status \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"5d567dc1b0e22b23fcddb27e44a38320b8652f00ef34dbcdb1dda03fa5f336b7\": not found" Feb 9 19:04:39.580036 kubelet[2584]: I0209 19:04:39.579976 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:39.580343 kubelet[2584]: E0209 19:04:39.580326 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26e7f171-50ff-46d8-a0ff-56b1574dfed7" containerName="flexvol-driver" Feb 9 19:04:39.580464 kubelet[2584]: I0209 19:04:39.580454 2584 memory_manager.go:346] "RemoveStaleState removing state" podUID="26e7f171-50ff-46d8-a0ff-56b1574dfed7" containerName="flexvol-driver" Feb 9 19:04:39.640360 kubelet[2584]: I0209 19:04:39.640320 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-xtables-lock\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640686 kubelet[2584]: I0209 19:04:39.640661 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-cni-log-dir\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640803 kubelet[2584]: I0209 19:04:39.640714 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-lib-modules\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640803 kubelet[2584]: I0209 19:04:39.640754 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-policysync\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640803 kubelet[2584]: I0209 19:04:39.640796 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-flexvol-driver-host\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640986 kubelet[2584]: I0209 19:04:39.640838 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-cni-bin-dir\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640986 kubelet[2584]: I0209 19:04:39.640882 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f1b65d-9ac0-4571-8393-344894a79156-tigera-ca-bundle\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640986 kubelet[2584]: I0209 19:04:39.640924 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6js\" (UniqueName: \"kubernetes.io/projected/a1f1b65d-9ac0-4571-8393-344894a79156-kube-api-access-vq6js\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.640986 kubelet[2584]: I0209 19:04:39.640965 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/a1f1b65d-9ac0-4571-8393-344894a79156-node-certs\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.641234 kubelet[2584]: I0209 19:04:39.641008 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-var-lib-calico\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.641234 kubelet[2584]: I0209 19:04:39.641075 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-var-run-calico\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.641234 kubelet[2584]: I0209 19:04:39.641115 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a1f1b65d-9ac0-4571-8393-344894a79156-cni-net-dir\") pod \"calico-node-jwtqh\" (UID: \"a1f1b65d-9ac0-4571-8393-344894a79156\") " pod="calico-system/calico-node-jwtqh" Feb 9 19:04:39.895848 env[1412]: time="2024-02-09T19:04:39.895385022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jwtqh,Uid:a1f1b65d-9ac0-4571-8393-344894a79156,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:39.960287 env[1412]: time="2024-02-09T19:04:39.952980615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:39.960287 env[1412]: time="2024-02-09T19:04:39.953042415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:39.960287 env[1412]: time="2024-02-09T19:04:39.953056815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:39.960287 env[1412]: time="2024-02-09T19:04:39.953232617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5 pid=3699 runtime=io.containerd.runc.v2 Feb 9 19:04:40.040093 env[1412]: time="2024-02-09T19:04:40.039986704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jwtqh,Uid:a1f1b65d-9ac0-4571-8393-344894a79156,Namespace:calico-system,Attempt:0,} returns sandbox id \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\"" Feb 9 19:04:40.043446 env[1412]: time="2024-02-09T19:04:40.043391627Z" level=info msg="CreateContainer within sandbox \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:04:40.103897 env[1412]: time="2024-02-09T19:04:40.103833833Z" level=info msg="CreateContainer within sandbox \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6b57229b796d52604b11ef649c9d27524680b6d7b1b36d21b108a5beac2eb2d\"" Feb 9 19:04:40.104785 env[1412]: time="2024-02-09T19:04:40.104744539Z" level=info msg="StartContainer for \"f6b57229b796d52604b11ef649c9d27524680b6d7b1b36d21b108a5beac2eb2d\"" Feb 9 19:04:40.212331 env[1412]: time="2024-02-09T19:04:40.212176461Z" level=info msg="StartContainer for \"f6b57229b796d52604b11ef649c9d27524680b6d7b1b36d21b108a5beac2eb2d\" returns successfully" Feb 9 19:04:40.308833 env[1412]: time="2024-02-09T19:04:40.308754309Z" level=info msg="shim disconnected" id=f6b57229b796d52604b11ef649c9d27524680b6d7b1b36d21b108a5beac2eb2d Feb 9 
19:04:40.308833 env[1412]: time="2024-02-09T19:04:40.308830210Z" level=warning msg="cleaning up after shim disconnected" id=f6b57229b796d52604b11ef649c9d27524680b6d7b1b36d21b108a5beac2eb2d namespace=k8s.io Feb 9 19:04:40.308833 env[1412]: time="2024-02-09T19:04:40.308843810Z" level=info msg="cleaning up dead shim" Feb 9 19:04:40.323544 env[1412]: time="2024-02-09T19:04:40.323480708Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3777 runtime=io.containerd.runc.v2\n" Feb 9 19:04:40.372505 kubelet[2584]: E0209 19:04:40.370857 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:40.375000 env[1412]: time="2024-02-09T19:04:40.373595845Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:40.375182 env[1412]: time="2024-02-09T19:04:40.375120455Z" level=info msg="TearDown network for sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" successfully" Feb 9 19:04:40.375332 env[1412]: time="2024-02-09T19:04:40.375184756Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" returns successfully" Feb 9 19:04:40.375554 kubelet[2584]: I0209 19:04:40.375534 2584 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=26e7f171-50ff-46d8-a0ff-56b1574dfed7 path="/var/lib/kubelet/pods/26e7f171-50ff-46d8-a0ff-56b1574dfed7/volumes" Feb 9 19:04:40.535246 env[1412]: time="2024-02-09T19:04:40.531463505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:04:42.370883 kubelet[2584]: E0209 19:04:42.370828 2584 pod_workers.go:965] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:42.433647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161375084.mount: Deactivated successfully. Feb 9 19:04:44.371593 kubelet[2584]: E0209 19:04:44.370943 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:46.372277 kubelet[2584]: E0209 19:04:46.370729 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:47.091736 update_engine[1376]: I0209 19:04:47.091100 1376 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:04:47.091736 update_engine[1376]: I0209 19:04:47.091429 1376 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:04:47.091736 update_engine[1376]: I0209 19:04:47.091675 1376 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:04:47.096416 update_engine[1376]: E0209 19:04:47.096244 1376 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:04:47.096416 update_engine[1376]: I0209 19:04:47.096380 1376 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 19:04:48.127439 env[1412]: time="2024-02-09T19:04:48.127302537Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:48.142091 env[1412]: time="2024-02-09T19:04:48.141946624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:48.156558 env[1412]: time="2024-02-09T19:04:48.156417310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:48.203250 env[1412]: time="2024-02-09T19:04:48.203152289Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:48.218615 env[1412]: time="2024-02-09T19:04:48.217116672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:04:48.237440 env[1412]: time="2024-02-09T19:04:48.237309392Z" level=info msg="CreateContainer within sandbox \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:04:48.304392 env[1412]: time="2024-02-09T19:04:48.304254891Z" level=info msg="CreateContainer within sandbox 
\"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163\"" Feb 9 19:04:48.305898 env[1412]: time="2024-02-09T19:04:48.305822300Z" level=info msg="StartContainer for \"331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163\"" Feb 9 19:04:48.384455 systemd[1]: run-containerd-runc-k8s.io-331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163-runc.ny3r9w.mount: Deactivated successfully. Feb 9 19:04:48.388305 kubelet[2584]: E0209 19:04:48.384616 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:48.509381 env[1412]: time="2024-02-09T19:04:48.509259812Z" level=info msg="StartContainer for \"331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163\" returns successfully" Feb 9 19:04:50.344256 env[1412]: time="2024-02-09T19:04:50.344149504Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:04:50.372453 kubelet[2584]: E0209 19:04:50.372391 2584 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:50.407570 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163-rootfs.mount: Deactivated successfully. Feb 9 19:04:50.442054 kubelet[2584]: I0209 19:04:50.438210 2584 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:04:50.466197 kubelet[2584]: I0209 19:04:50.466144 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:50.473381 kubelet[2584]: I0209 19:04:50.473326 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:50.479985 kubelet[2584]: I0209 19:04:50.479940 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:50.644053 kubelet[2584]: I0209 19:04:50.643900 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k299m\" (UniqueName: \"kubernetes.io/projected/61c035ba-b6dd-472f-9488-d6ad43894181-kube-api-access-k299m\") pod \"coredns-787d4945fb-q7cfj\" (UID: \"61c035ba-b6dd-472f-9488-d6ad43894181\") " pod="kube-system/coredns-787d4945fb-q7cfj" Feb 9 19:04:50.644348 kubelet[2584]: I0209 19:04:50.644332 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eccece5-9f29-4db5-bff8-1f226e4e1432-tigera-ca-bundle\") pod \"calico-kube-controllers-b4fdbd88f-42mmn\" (UID: \"6eccece5-9f29-4db5-bff8-1f226e4e1432\") " pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" Feb 9 19:04:50.644479 kubelet[2584]: I0209 19:04:50.644466 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hhn\" (UniqueName: \"kubernetes.io/projected/74ec09d8-0957-412d-816d-b7f3528f1e43-kube-api-access-d9hhn\") pod \"coredns-787d4945fb-whh7p\" (UID: \"74ec09d8-0957-412d-816d-b7f3528f1e43\") " pod="kube-system/coredns-787d4945fb-whh7p" Feb 9 19:04:50.644684 kubelet[2584]: I0209 19:04:50.644661 2584 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdzg\" (UniqueName: \"kubernetes.io/projected/6eccece5-9f29-4db5-bff8-1f226e4e1432-kube-api-access-pjdzg\") pod \"calico-kube-controllers-b4fdbd88f-42mmn\" (UID: \"6eccece5-9f29-4db5-bff8-1f226e4e1432\") " pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" Feb 9 19:04:50.644800 kubelet[2584]: I0209 19:04:50.644710 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74ec09d8-0957-412d-816d-b7f3528f1e43-config-volume\") pod \"coredns-787d4945fb-whh7p\" (UID: \"74ec09d8-0957-412d-816d-b7f3528f1e43\") " pod="kube-system/coredns-787d4945fb-whh7p" Feb 9 19:04:50.644800 kubelet[2584]: I0209 19:04:50.644751 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c035ba-b6dd-472f-9488-d6ad43894181-config-volume\") pod \"coredns-787d4945fb-q7cfj\" (UID: \"61c035ba-b6dd-472f-9488-d6ad43894181\") " pod="kube-system/coredns-787d4945fb-q7cfj" Feb 9 19:04:52.051735 env[1412]: time="2024-02-09T19:04:52.051052311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q7cfj,Uid:61c035ba-b6dd-472f-9488-d6ad43894181,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:52.054458 env[1412]: time="2024-02-09T19:04:52.054058728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4fdbd88f-42mmn,Uid:6eccece5-9f29-4db5-bff8-1f226e4e1432,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:52.055333 env[1412]: time="2024-02-09T19:04:52.055081234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-whh7p,Uid:74ec09d8-0957-412d-816d-b7f3528f1e43,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:52.068684 env[1412]: time="2024-02-09T19:04:52.068636610Z" level=info msg="shim disconnected" 
id=331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163 Feb 9 19:04:52.068684 env[1412]: time="2024-02-09T19:04:52.068685911Z" level=warning msg="cleaning up after shim disconnected" id=331f1fde39c62082d9bfaf12efb7be59cc04d2878c64b1766efff9f820bcf163 namespace=k8s.io Feb 9 19:04:52.068684 env[1412]: time="2024-02-09T19:04:52.068697511Z" level=info msg="cleaning up dead shim" Feb 9 19:04:52.079558 env[1412]: time="2024-02-09T19:04:52.079501272Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\n" Feb 9 19:04:52.306081 env[1412]: time="2024-02-09T19:04:52.305843849Z" level=error msg="Failed to destroy network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.306891 env[1412]: time="2024-02-09T19:04:52.306834955Z" level=error msg="encountered an error cleaning up failed sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.307184 env[1412]: time="2024-02-09T19:04:52.307132356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q7cfj,Uid:61c035ba-b6dd-472f-9488-d6ad43894181,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 
19:04:52.307682 kubelet[2584]: E0209 19:04:52.307640 2584 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.308215 kubelet[2584]: E0209 19:04:52.307738 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-q7cfj" Feb 9 19:04:52.308215 kubelet[2584]: E0209 19:04:52.307773 2584 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-q7cfj" Feb 9 19:04:52.308215 kubelet[2584]: E0209 19:04:52.307854 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-q7cfj_kube-system(61c035ba-b6dd-472f-9488-d6ad43894181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-q7cfj_kube-system(61c035ba-b6dd-472f-9488-d6ad43894181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q7cfj" podUID=61c035ba-b6dd-472f-9488-d6ad43894181 Feb 9 19:04:52.315105 env[1412]: time="2024-02-09T19:04:52.315038201Z" level=error msg="Failed to destroy network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.315719 env[1412]: time="2024-02-09T19:04:52.315672404Z" level=error msg="encountered an error cleaning up failed sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.315929 env[1412]: time="2024-02-09T19:04:52.315886606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-whh7p,Uid:74ec09d8-0957-412d-816d-b7f3528f1e43,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.316753 kubelet[2584]: E0209 19:04:52.316303 2584 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 9 19:04:52.316753 kubelet[2584]: E0209 19:04:52.316373 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-whh7p" Feb 9 19:04:52.316753 kubelet[2584]: E0209 19:04:52.316405 2584 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-whh7p" Feb 9 19:04:52.317015 kubelet[2584]: E0209 19:04:52.316489 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-whh7p_kube-system(74ec09d8-0957-412d-816d-b7f3528f1e43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-whh7p_kube-system(74ec09d8-0957-412d-816d-b7f3528f1e43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-whh7p" podUID=74ec09d8-0957-412d-816d-b7f3528f1e43 Feb 9 19:04:52.324633 env[1412]: time="2024-02-09T19:04:52.324569455Z" level=error msg="Failed to destroy network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.325008 env[1412]: time="2024-02-09T19:04:52.324965257Z" level=error msg="encountered an error cleaning up failed sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.325121 env[1412]: time="2024-02-09T19:04:52.325045057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4fdbd88f-42mmn,Uid:6eccece5-9f29-4db5-bff8-1f226e4e1432,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.325712 kubelet[2584]: E0209 19:04:52.325328 2584 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.325712 kubelet[2584]: E0209 19:04:52.325392 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" Feb 9 19:04:52.325712 kubelet[2584]: E0209 19:04:52.325416 2584 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" Feb 9 19:04:52.327409 kubelet[2584]: E0209 19:04:52.325485 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b4fdbd88f-42mmn_calico-system(6eccece5-9f29-4db5-bff8-1f226e4e1432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b4fdbd88f-42mmn_calico-system(6eccece5-9f29-4db5-bff8-1f226e4e1432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" podUID=6eccece5-9f29-4db5-bff8-1f226e4e1432 Feb 9 19:04:52.375743 env[1412]: time="2024-02-09T19:04:52.375662943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2kjb,Uid:b139bbd0-9b20-41a8-9896-7f2a7ac77265,Namespace:calico-system,Attempt:0,}" Feb 9 19:04:52.466116 env[1412]: time="2024-02-09T19:04:52.465999853Z" level=error msg="Failed to destroy network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.466510 env[1412]: time="2024-02-09T19:04:52.466468655Z" level=error msg="encountered an error cleaning up failed sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.466623 env[1412]: time="2024-02-09T19:04:52.466538456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2kjb,Uid:b139bbd0-9b20-41a8-9896-7f2a7ac77265,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.466848 kubelet[2584]: E0209 19:04:52.466813 2584 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.466963 kubelet[2584]: E0209 19:04:52.466885 2584 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:52.466963 kubelet[2584]: E0209 19:04:52.466928 2584 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2kjb" Feb 9 19:04:52.467078 kubelet[2584]: E0209 19:04:52.467010 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r2kjb_calico-system(b139bbd0-9b20-41a8-9896-7f2a7ac77265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r2kjb_calico-system(b139bbd0-9b20-41a8-9896-7f2a7ac77265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:52.572067 kubelet[2584]: I0209 19:04:52.570621 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:04:52.573764 env[1412]: time="2024-02-09T19:04:52.573103257Z" level=info msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" Feb 9 19:04:52.576852 kubelet[2584]: I0209 19:04:52.576074 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:04:52.577278 env[1412]: time="2024-02-09T19:04:52.577233380Z" 
level=info msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" Feb 9 19:04:52.579356 kubelet[2584]: I0209 19:04:52.579294 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:04:52.583082 env[1412]: time="2024-02-09T19:04:52.583039413Z" level=info msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" Feb 9 19:04:52.588516 env[1412]: time="2024-02-09T19:04:52.588481344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:04:52.590214 kubelet[2584]: I0209 19:04:52.589796 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:04:52.590526 env[1412]: time="2024-02-09T19:04:52.590497055Z" level=info msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" Feb 9 19:04:52.843696 env[1412]: time="2024-02-09T19:04:52.842551478Z" level=error msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" failed" error="failed to destroy network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.843874 kubelet[2584]: E0209 19:04:52.843480 2584 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:04:52.843874 kubelet[2584]: E0209 19:04:52.843543 2584 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840} Feb 9 19:04:52.843874 kubelet[2584]: E0209 19:04:52.843604 2584 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6eccece5-9f29-4db5-bff8-1f226e4e1432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:04:52.843874 kubelet[2584]: E0209 19:04:52.843639 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6eccece5-9f29-4db5-bff8-1f226e4e1432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" podUID=6eccece5-9f29-4db5-bff8-1f226e4e1432 Feb 9 19:04:52.844259 env[1412]: time="2024-02-09T19:04:52.844209287Z" level=error msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" failed" error="failed to destroy network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 
19:04:52.844476 kubelet[2584]: E0209 19:04:52.844435 2584 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:04:52.844581 kubelet[2584]: E0209 19:04:52.844478 2584 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b} Feb 9 19:04:52.844581 kubelet[2584]: E0209 19:04:52.844519 2584 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61c035ba-b6dd-472f-9488-d6ad43894181\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:04:52.844581 kubelet[2584]: E0209 19:04:52.844557 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61c035ba-b6dd-472f-9488-d6ad43894181\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q7cfj" podUID=61c035ba-b6dd-472f-9488-d6ad43894181 Feb 9 19:04:52.845237 env[1412]: 
time="2024-02-09T19:04:52.845189993Z" level=error msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" failed" error="failed to destroy network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.845396 kubelet[2584]: E0209 19:04:52.845378 2584 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:04:52.845479 kubelet[2584]: E0209 19:04:52.845411 2584 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49} Feb 9 19:04:52.845479 kubelet[2584]: E0209 19:04:52.845452 2584 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:04:52.845594 kubelet[2584]: E0209 19:04:52.845506 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b139bbd0-9b20-41a8-9896-7f2a7ac77265\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2kjb" podUID=b139bbd0-9b20-41a8-9896-7f2a7ac77265 Feb 9 19:04:52.850561 env[1412]: time="2024-02-09T19:04:52.850514223Z" level=error msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" failed" error="failed to destroy network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:04:52.850717 kubelet[2584]: E0209 19:04:52.850699 2584 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:04:52.850824 kubelet[2584]: E0209 19:04:52.850733 2584 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455} Feb 9 19:04:52.850824 kubelet[2584]: E0209 19:04:52.850774 2584 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74ec09d8-0957-412d-816d-b7f3528f1e43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:04:52.850824 kubelet[2584]: E0209 19:04:52.850812 2584 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74ec09d8-0957-412d-816d-b7f3528f1e43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-whh7p" podUID=74ec09d8-0957-412d-816d-b7f3528f1e43 Feb 9 19:04:53.150110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840-shm.mount: Deactivated successfully. Feb 9 19:04:53.150337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455-shm.mount: Deactivated successfully. Feb 9 19:04:53.150498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b-shm.mount: Deactivated successfully. Feb 9 19:04:57.086511 update_engine[1376]: I0209 19:04:57.086436 1376 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:04:57.087234 update_engine[1376]: I0209 19:04:57.086925 1376 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:04:57.087305 update_engine[1376]: I0209 19:04:57.087258 1376 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:04:57.091825 update_engine[1376]: E0209 19:04:57.091791 1376 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:04:57.091980 update_engine[1376]: I0209 19:04:57.091958 1376 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 19:04:58.851518 env[1412]: time="2024-02-09T19:04:58.851456708Z" level=info msg="StopPodSandbox for \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\"" Feb 9 19:04:58.852418 env[1412]: time="2024-02-09T19:04:58.852344013Z" level=info msg="TearDown network for sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" successfully" Feb 9 19:04:58.853743 env[1412]: time="2024-02-09T19:04:58.853711720Z" level=info msg="StopPodSandbox for \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" returns successfully" Feb 9 19:04:58.854343 env[1412]: time="2024-02-09T19:04:58.854315123Z" level=info msg="RemovePodSandbox for \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\"" Feb 9 19:04:58.854498 env[1412]: time="2024-02-09T19:04:58.854453824Z" level=info msg="Forcibly stopping sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\"" Feb 9 19:04:58.854666 env[1412]: time="2024-02-09T19:04:58.854643325Z" level=info msg="TearDown network for sandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" successfully" Feb 9 19:04:58.865387 env[1412]: time="2024-02-09T19:04:58.865336781Z" level=info msg="RemovePodSandbox \"7564127d1ce4c502035f616606ec53c97fcbd63a4efd3fe8a34ee48a4d6464ea\" returns successfully" Feb 9 19:04:58.866276 env[1412]: time="2024-02-09T19:04:58.866240886Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:58.866623 env[1412]: time="2024-02-09T19:04:58.866557687Z" level=info msg="TearDown network for sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" successfully" Feb 9 
19:04:58.866755 env[1412]: time="2024-02-09T19:04:58.866736688Z" level=info msg="StopPodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" returns successfully" Feb 9 19:04:58.867168 env[1412]: time="2024-02-09T19:04:58.867140190Z" level=info msg="RemovePodSandbox for \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:58.867335 env[1412]: time="2024-02-09T19:04:58.867293891Z" level=info msg="Forcibly stopping sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\"" Feb 9 19:04:58.867485 env[1412]: time="2024-02-09T19:04:58.867456092Z" level=info msg="TearDown network for sandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" successfully" Feb 9 19:04:58.876773 env[1412]: time="2024-02-09T19:04:58.876705941Z" level=info msg="RemovePodSandbox \"1996decad2ea51e12b211583699655e423fc3f087f7544ab47d2f148f77b9a79\" returns successfully" Feb 9 19:05:01.093373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753143434.mount: Deactivated successfully. 
Feb 9 19:05:01.228655 env[1412]: time="2024-02-09T19:05:01.228593753Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:01.237438 env[1412]: time="2024-02-09T19:05:01.237371198Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:01.241862 env[1412]: time="2024-02-09T19:05:01.241806720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:01.246514 env[1412]: time="2024-02-09T19:05:01.246453344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:01.247082 env[1412]: time="2024-02-09T19:05:01.247013247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:05:01.268440 env[1412]: time="2024-02-09T19:05:01.268387155Z" level=info msg="CreateContainer within sandbox \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:05:01.299494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274361.mount: Deactivated successfully. 
Feb 9 19:05:01.319069 env[1412]: time="2024-02-09T19:05:01.318988612Z" level=info msg="CreateContainer within sandbox \"e98e258e8c55cd68159f2e9754e16e23b9684a5a9caeae50b97c277248e0f2e5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b\"" Feb 9 19:05:01.322053 env[1412]: time="2024-02-09T19:05:01.321885926Z" level=info msg="StartContainer for \"07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b\"" Feb 9 19:05:01.413994 env[1412]: time="2024-02-09T19:05:01.413177389Z" level=info msg="StartContainer for \"07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b\" returns successfully" Feb 9 19:05:01.738122 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:05:01.738353 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:05:03.163556 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:05:03.163804 kernel: audit: type=1400 audit(1707505503.142:294): avc: denied { write } for pid=4235 comm="tee" name="fd" dev="proc" ino=32545 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.142000 audit[4235]: AVC avc: denied { write } for pid=4235 comm="tee" name="fd" dev="proc" ino=32545 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.165000 audit[4239]: AVC avc: denied { write } for pid=4239 comm="tee" name="fd" dev="proc" ino=32550 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.184300 kernel: audit: type=1400 audit(1707505503.165:295): avc: denied { write } for pid=4239 comm="tee" name="fd" dev="proc" ino=32550 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.165000 audit[4239]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7fffaab58962 a2=241 a3=1b6 items=1 ppid=4201 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.210211 kernel: audit: type=1300 audit(1707505503.165:295): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffaab58962 a2=241 a3=1b6 items=1 ppid=4201 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.165000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:05:03.234055 kernel: audit: type=1307 audit(1707505503.165:295): cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:05:03.165000 audit: PATH item=0 name="/dev/fd/63" inode=32535 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.264458 kernel: audit: type=1302 audit(1707505503.165:295): item=0 name="/dev/fd/63" inode=32535 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.264655 kernel: audit: type=1327 audit(1707505503.165:295): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.142000 audit[4235]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd61e52971 a2=241 a3=1b6 items=1 ppid=4203 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.290086 kernel: audit: type=1300 audit(1707505503.142:294): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd61e52971 a2=241 a3=1b6 items=1 ppid=4203 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.290283 kernel: audit: type=1307 audit(1707505503.142:294): cwd="/etc/service/enabled/confd/log" Feb 9 19:05:03.142000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:05:03.142000 audit: PATH item=0 name="/dev/fd/63" inode=32520 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.311169 kernel: audit: type=1302 audit(1707505503.142:294): item=0 name="/dev/fd/63" inode=32520 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.326271 kernel: audit: type=1327 audit(1707505503.142:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.169000 audit[4250]: AVC avc: denied { write } for pid=4250 comm="tee" name="fd" dev="proc" ino=32557 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.169000 audit[4250]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea48da971 a2=241 a3=1b6 items=1 ppid=4205 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.169000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:05:03.169000 audit: PATH item=0 name="/dev/fd/63" inode=32538 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.210000 audit[4271]: AVC avc: denied { write } for pid=4271 comm="tee" name="fd" dev="proc" ino=32570 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.210000 audit[4271]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff688f4971 a2=241 a3=1b6 items=1 ppid=4224 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.210000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:05:03.210000 audit: PATH item=0 name="/dev/fd/63" inode=32567 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.210000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.255000 audit[4267]: AVC avc: denied { write } for pid=4267 comm="tee" name="fd" dev="proc" ino=32580 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.256000 audit[4269]: AVC avc: denied { write } for pid=4269 comm="tee" name="fd" dev="proc" ino=32583 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.256000 audit[4269]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe17004973 a2=241 a3=1b6 items=1 ppid=4225 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.256000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:05:03.256000 audit: PATH item=0 name="/dev/fd/63" inode=32564 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.255000 audit[4267]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc12dce972 a2=241 a3=1b6 items=1 ppid=4220 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.255000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:05:03.255000 audit: PATH item=0 name="/dev/fd/63" inode=32561 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.328000 audit[4279]: AVC avc: denied { write } for pid=4279 comm="tee" name="fd" dev="proc" ino=33169 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:05:03.328000 audit[4279]: SYSCALL arch=c000003e syscall=257 success=yes 
exit=3 a0=ffffff9c a1=7ffdf4ef8961 a2=241 a3=1b6 items=1 ppid=4221 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.328000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:05:03.328000 audit: PATH item=0 name="/dev/fd/63" inode=32588 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:05:03.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit: BPF prog-id=10 op=LOAD Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc9eb0ca0 a2=70 a3=7f542d31f000 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.725000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit: BPF prog-id=11 op=LOAD Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc9eb0ca0 a2=70 a3=6e items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 
19:05:03.725000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcc9eb0c50 a2=70 a3=7ffcc9eb0ca0 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for 
pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit: BPF prog-id=12 op=LOAD Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc9eb0c30 a2=70 a3=7ffcc9eb0ca0 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.725000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc9eb0d10 a2=70 a3=0 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc9eb0d00 a2=70 a3=0 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.725000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.725000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffcc9eb0d40 a2=70 a3=0 items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { perfmon } for pid=4341 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit[4341]: AVC avc: denied { bpf } for pid=4341 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.726000 audit: BPF prog-id=13 op=LOAD Feb 9 19:05:03.726000 audit[4341]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc9eb0c60 a2=70 a3=ffffffff items=0 ppid=4226 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:05:03.734000 audit[4343]: AVC avc: denied { bpf } for pid=4343 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.734000 audit[4343]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdf4399a00 a2=70 a3=fff80800 items=0 ppid=4226 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:05:03.734000 audit[4343]: AVC avc: denied { bpf } for pid=4343 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:05:03.734000 audit[4343]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdf43998d0 a2=70 a3=3 items=0 ppid=4226 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:05:03.742147 systemd-networkd[1556]: calico_tmp_B: Failed to manage SR-IOV PF and VF ports, ignoring: Invalid argument Feb 9 19:05:03.741000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:05:03.741000 audit[1511]: SYSCALL arch=c000003e syscall=262 success=no exit=-2 a0=ffffff9c a1=55a2baf16140 a2=7fffc9c216d0 a3=0 items=0 ppid=1 pid=1511 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.741000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:05:03.923000 audit[4368]: NETFILTER_CFG table=mangle:119 family=2 entries=19 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:03.923000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fff72720980 a2=0 a3=7fff7272096c items=0 ppid=4226 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.923000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:03.935000 audit[4367]: NETFILTER_CFG table=nat:120 family=2 entries=16 op=nft_register_chain pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:03.935000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffe50ac28b0 a2=0 a3=5647b22e4000 items=0 ppid=4226 pid=4367 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.935000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:03.940000 audit[4372]: NETFILTER_CFG table=filter:121 family=2 entries=39 op=nft_register_chain pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:03.940000 audit[4372]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffeeec04e70 a2=0 a3=55e0620d1000 items=0 ppid=4226 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:03.950000 audit[4366]: NETFILTER_CFG table=raw:122 family=2 entries=19 op=nft_register_chain pid=4366 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:03.950000 audit[4366]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffcde6e67d0 a2=0 a3=55b49c443000 items=0 ppid=4226 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:03.950000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:04.638256 systemd-networkd[1556]: vxlan.calico: Link UP Feb 9 19:05:04.638268 systemd-networkd[1556]: vxlan.calico: Gained carrier Feb 9 
19:05:05.372084 env[1412]: time="2024-02-09T19:05:05.371870059Z" level=info msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" Feb 9 19:05:05.427578 kubelet[2584]: I0209 19:05:05.426872 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jwtqh" podStartSLOduration=-9.223372010427969e+09 pod.CreationTimestamp="2024-02-09 19:04:39 +0000 UTC" firstStartedPulling="2024-02-09 19:04:40.5307007 +0000 UTC m=+42.414699312" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:01.647222075 +0000 UTC m=+63.531220687" watchObservedRunningTime="2024-02-09 19:05:05.426807326 +0000 UTC m=+67.310806038" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.426 [INFO][4396] k8s.go 578: Cleaning up netns ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.428 [INFO][4396] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" iface="eth0" netns="/var/run/netns/cni-b8b7d4aa-f02e-29f6-499a-d218dbb69283" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.428 [INFO][4396] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" iface="eth0" netns="/var/run/netns/cni-b8b7d4aa-f02e-29f6-499a-d218dbb69283" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.429 [INFO][4396] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" iface="eth0" netns="/var/run/netns/cni-b8b7d4aa-f02e-29f6-499a-d218dbb69283" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.429 [INFO][4396] k8s.go 585: Releasing IP address(es) ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.429 [INFO][4396] utils.go 188: Calico CNI releasing IP address ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.458 [INFO][4402] ipam_plugin.go 415: Releasing address using handleID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.458 [INFO][4402] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.458 [INFO][4402] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.465 [WARNING][4402] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.465 [INFO][4402] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.467 [INFO][4402] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:05.469681 env[1412]: 2024-02-09 19:05:05.468 [INFO][4396] k8s.go 591: Teardown processing complete. ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:05.470620 env[1412]: time="2024-02-09T19:05:05.470568539Z" level=info msg="TearDown network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" successfully" Feb 9 19:05:05.470750 env[1412]: time="2024-02-09T19:05:05.470731640Z" level=info msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" returns successfully" Feb 9 19:05:05.474636 systemd[1]: run-netns-cni\x2db8b7d4aa\x2df02e\x2d29f6\x2d499a\x2dd218dbb69283.mount: Deactivated successfully. 
Feb 9 19:05:05.477167 env[1412]: time="2024-02-09T19:05:05.475269962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2kjb,Uid:b139bbd0-9b20-41a8-9896-7f2a7ac77265,Namespace:calico-system,Attempt:1,}" Feb 9 19:05:05.646562 systemd-networkd[1556]: calib2a10f52b51: Link UP Feb 9 19:05:05.660392 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:05:05.660574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib2a10f52b51: link becomes ready Feb 9 19:05:05.661307 systemd-networkd[1556]: calib2a10f52b51: Gained carrier Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.565 [INFO][4409] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0 csi-node-driver- calico-system b139bbd0-9b20-41a8-9896-7f2a7ac77265 818 0 2024-02-09 19:04:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-a-00ed68a33d csi-node-driver-r2kjb eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib2a10f52b51 [] []}} ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.565 [INFO][4409] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.596 [INFO][4420] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" HandleID="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.607 [INFO][4420] ipam_plugin.go 268: Auto assigning IP ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" HandleID="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"csi-node-driver-r2kjb", "timestamp":"2024-02-09 19:05:05.596956654 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.607 [INFO][4420] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.607 [INFO][4420] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.607 [INFO][4420] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.608 [INFO][4420] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.618 [INFO][4420] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.622 [INFO][4420] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.623 [INFO][4420] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.627 [INFO][4420] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.627 [INFO][4420] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.628 [INFO][4420] ipam.go 1682: Creating new handle: k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2 Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.632 [INFO][4420] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.638 [INFO][4420] ipam.go 1216: Successfully claimed IPs: [192.168.88.65/26] block=192.168.88.64/26 
handle="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.638 [INFO][4420] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.65/26] handle="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.639 [INFO][4420] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:05.685062 env[1412]: 2024-02-09 19:05:05.639 [INFO][4420] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.65/26] IPv6=[] ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" HandleID="k8s-pod-network.c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.643 [INFO][4409] k8s.go 385: Populated endpoint ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b139bbd0-9b20-41a8-9896-7f2a7ac77265", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"csi-node-driver-r2kjb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib2a10f52b51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.643 [INFO][4409] k8s.go 386: Calico CNI using IPs: [192.168.88.65/32] ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.643 [INFO][4409] dataplane_linux.go 68: Setting the host side veth name to calib2a10f52b51 ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.667 [INFO][4409] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.667 [INFO][4409] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" 
WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b139bbd0-9b20-41a8-9896-7f2a7ac77265", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2", Pod:"csi-node-driver-r2kjb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib2a10f52b51", MAC:"0a:c9:d9:4f:81:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:05.685879 env[1412]: 2024-02-09 19:05:05.678 [INFO][4409] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2" Namespace="calico-system" Pod="csi-node-driver-r2kjb" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:05.712515 env[1412]: 
time="2024-02-09T19:05:05.712414715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:05.712860 env[1412]: time="2024-02-09T19:05:05.712822417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:05.713084 env[1412]: time="2024-02-09T19:05:05.713008218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:05.713514 env[1412]: time="2024-02-09T19:05:05.713454920Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2 pid=4448 runtime=io.containerd.runc.v2 Feb 9 19:05:05.729000 audit[4456]: NETFILTER_CFG table=filter:123 family=2 entries=36 op=nft_register_chain pid=4456 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:05.729000 audit[4456]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffcc04fcd10 a2=0 a3=7ffcc04fccfc items=0 ppid=4226 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:05.729000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:05.787396 env[1412]: time="2024-02-09T19:05:05.787348179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2kjb,Uid:b139bbd0-9b20-41a8-9896-7f2a7ac77265,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2\"" Feb 9 19:05:05.791805 env[1412]: time="2024-02-09T19:05:05.791750001Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:05:05.985291 systemd-networkd[1556]: vxlan.calico: Gained IPv6LL Feb 9 19:05:06.817361 systemd-networkd[1556]: calib2a10f52b51: Gained IPv6LL Feb 9 19:05:07.087331 update_engine[1376]: I0209 19:05:07.087172 1376 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:05:07.087878 update_engine[1376]: I0209 19:05:07.087499 1376 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:05:07.087878 update_engine[1376]: I0209 19:05:07.087833 1376 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 19:05:07.117357 update_engine[1376]: E0209 19:05:07.117302 1376 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117460 1376 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117472 1376 omaha_request_action.cc:621] Omaha request response: Feb 9 19:05:07.117613 update_engine[1376]: E0209 19:05:07.117561 1376 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117579 1376 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117583 1376 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117588 1376 update_attempter.cc:306] Processing Done. Feb 9 19:05:07.117613 update_engine[1376]: E0209 19:05:07.117607 1376 update_attempter.cc:619] Update failed. 
Feb 9 19:05:07.117613 update_engine[1376]: I0209 19:05:07.117614 1376 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117620 1376 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117625 1376 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117738 1376 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117766 1376 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117771 1376 omaha_request_action.cc:271] Request: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: Feb 9 19:05:07.117915 update_engine[1376]: I0209 19:05:07.117779 1376 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:05:07.118637 update_engine[1376]: I0209 19:05:07.117992 1376 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:05:07.118637 update_engine[1376]: I0209 19:05:07.118263 1376 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:05:07.118696 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 19:05:07.140374 update_engine[1376]: E0209 19:05:07.140300 1376 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140476 1376 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140491 1376 omaha_request_action.cc:621] Omaha request response: Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140501 1376 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140506 1376 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140511 1376 update_attempter.cc:306] Processing Done. Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140518 1376 update_attempter.cc:310] Error event sent. 
Feb 9 19:05:07.140600 update_engine[1376]: I0209 19:05:07.140532 1376 update_check_scheduler.cc:74] Next update check in 40m6s Feb 9 19:05:07.141120 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 19:05:07.372723 env[1412]: time="2024-02-09T19:05:07.372524903Z" level=info msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" Feb 9 19:05:07.374411 env[1412]: time="2024-02-09T19:05:07.374083310Z" level=info msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" Feb 9 19:05:07.375430 env[1412]: time="2024-02-09T19:05:07.374136811Z" level=info msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] k8s.go 578: Cleaning up netns ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" iface="eth0" netns="/var/run/netns/cni-3b9d4f20-9063-32e4-00c8-5ae591d22e2b" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" iface="eth0" netns="/var/run/netns/cni-3b9d4f20-9063-32e4-00c8-5ae591d22e2b" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" iface="eth0" netns="/var/run/netns/cni-3b9d4f20-9063-32e4-00c8-5ae591d22e2b" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] k8s.go 585: Releasing IP address(es) ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.508 [INFO][4533] utils.go 188: Calico CNI releasing IP address ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.566 [INFO][4543] ipam_plugin.go 415: Releasing address using handleID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.566 [INFO][4543] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.566 [INFO][4543] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.575 [WARNING][4543] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.575 [INFO][4543] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.578 [INFO][4543] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:07.581354 env[1412]: 2024-02-09 19:05:07.579 [INFO][4533] k8s.go 591: Teardown processing complete. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:07.588742 systemd[1]: run-netns-cni\x2d3b9d4f20\x2d9063\x2d32e4\x2d00c8\x2d5ae591d22e2b.mount: Deactivated successfully. 
Feb 9 19:05:07.592537 env[1412]: time="2024-02-09T19:05:07.592469552Z" level=info msg="TearDown network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" successfully" Feb 9 19:05:07.592723 env[1412]: time="2024-02-09T19:05:07.592698553Z" level=info msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" returns successfully" Feb 9 19:05:07.593839 env[1412]: time="2024-02-09T19:05:07.593801558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4fdbd88f-42mmn,Uid:6eccece5-9f29-4db5-bff8-1f226e4e1432,Namespace:calico-system,Attempt:1,}" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.527 [INFO][4522] k8s.go 578: Cleaning up netns ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.528 [INFO][4522] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" iface="eth0" netns="/var/run/netns/cni-62deb5bd-fd01-4af7-ebda-61960112652d" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.529 [INFO][4522] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" iface="eth0" netns="/var/run/netns/cni-62deb5bd-fd01-4af7-ebda-61960112652d" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.529 [INFO][4522] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" iface="eth0" netns="/var/run/netns/cni-62deb5bd-fd01-4af7-ebda-61960112652d" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.529 [INFO][4522] k8s.go 585: Releasing IP address(es) ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.529 [INFO][4522] utils.go 188: Calico CNI releasing IP address ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.598 [INFO][4548] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.598 [INFO][4548] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.598 [INFO][4548] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.609 [WARNING][4548] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.609 [INFO][4548] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.612 [INFO][4548] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:07.615291 env[1412]: 2024-02-09 19:05:07.614 [INFO][4522] k8s.go 591: Teardown processing complete. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:07.626475 env[1412]: time="2024-02-09T19:05:07.615475761Z" level=info msg="TearDown network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" successfully" Feb 9 19:05:07.626475 env[1412]: time="2024-02-09T19:05:07.615525462Z" level=info msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" returns successfully" Feb 9 19:05:07.626475 env[1412]: time="2024-02-09T19:05:07.616452266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-whh7p,Uid:74ec09d8-0957-412d-816d-b7f3528f1e43,Namespace:kube-system,Attempt:1,}" Feb 9 19:05:07.622965 systemd[1]: run-netns-cni\x2d62deb5bd\x2dfd01\x2d4af7\x2debda\x2d61960112652d.mount: Deactivated successfully. 
Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.530 [INFO][4520] k8s.go 578: Cleaning up netns ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.530 [INFO][4520] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" iface="eth0" netns="/var/run/netns/cni-d0b30a00-3dd0-b09d-af62-f00be0bf46e5" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.531 [INFO][4520] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" iface="eth0" netns="/var/run/netns/cni-d0b30a00-3dd0-b09d-af62-f00be0bf46e5" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.531 [INFO][4520] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" iface="eth0" netns="/var/run/netns/cni-d0b30a00-3dd0-b09d-af62-f00be0bf46e5" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.531 [INFO][4520] k8s.go 585: Releasing IP address(es) ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.531 [INFO][4520] utils.go 188: Calico CNI releasing IP address ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.613 [INFO][4549] ipam_plugin.go 415: Releasing address using handleID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.617 [INFO][4549] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.617 [INFO][4549] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.633 [WARNING][4549] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.633 [INFO][4549] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.636 [INFO][4549] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:07.639079 env[1412]: 2024-02-09 19:05:07.637 [INFO][4520] k8s.go 591: Teardown processing complete. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:07.644959 env[1412]: time="2024-02-09T19:05:07.639219374Z" level=info msg="TearDown network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" successfully" Feb 9 19:05:07.644959 env[1412]: time="2024-02-09T19:05:07.639271575Z" level=info msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" returns successfully" Feb 9 19:05:07.644959 env[1412]: time="2024-02-09T19:05:07.640014278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q7cfj,Uid:61c035ba-b6dd-472f-9488-d6ad43894181,Namespace:kube-system,Attempt:1,}" Feb 9 19:05:07.642584 systemd[1]: run-netns-cni\x2dd0b30a00\x2d3dd0\x2db09d\x2daf62\x2df00be0bf46e5.mount: Deactivated successfully. 
Feb 9 19:05:07.910476 systemd-networkd[1556]: cali02c43c15c8a: Link UP Feb 9 19:05:07.917241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:05:07.917356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali02c43c15c8a: link becomes ready Feb 9 19:05:07.924417 systemd-networkd[1556]: cali02c43c15c8a: Gained carrier Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.759 [INFO][4561] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0 calico-kube-controllers-b4fdbd88f- calico-system 6eccece5-9f29-4db5-bff8-1f226e4e1432 830 0 2024-02-09 19:04:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b4fdbd88f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-a-00ed68a33d calico-kube-controllers-b4fdbd88f-42mmn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali02c43c15c8a [] []}} ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.760 [INFO][4561] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.850 [INFO][4597] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" 
HandleID="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.861 [INFO][4597] ipam_plugin.go 268: Auto assigning IP ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" HandleID="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"calico-kube-controllers-b4fdbd88f-42mmn", "timestamp":"2024-02-09 19:05:07.850611682 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.862 [INFO][4597] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.862 [INFO][4597] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.862 [INFO][4597] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.864 [INFO][4597] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.869 [INFO][4597] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.873 [INFO][4597] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.876 [INFO][4597] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.883 [INFO][4597] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.884 [INFO][4597] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.886 [INFO][4597] ipam.go 1682: Creating new handle: k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.892 [INFO][4597] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.902 [INFO][4597] ipam.go 1216: Successfully claimed IPs: [192.168.88.66/26] block=192.168.88.64/26 
handle="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.902 [INFO][4597] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.66/26] handle="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.902 [INFO][4597] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:07.952327 env[1412]: 2024-02-09 19:05:07.902 [INFO][4597] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.66/26] IPv6=[] ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" HandleID="k8s-pod-network.b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.953364 env[1412]: 2024-02-09 19:05:07.904 [INFO][4561] k8s.go 385: Populated endpoint ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0", GenerateName:"calico-kube-controllers-b4fdbd88f-", Namespace:"calico-system", SelfLink:"", UID:"6eccece5-9f29-4db5-bff8-1f226e4e1432", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4fdbd88f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"calico-kube-controllers-b4fdbd88f-42mmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02c43c15c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:07.953364 env[1412]: 2024-02-09 19:05:07.904 [INFO][4561] k8s.go 386: Calico CNI using IPs: [192.168.88.66/32] ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.953364 env[1412]: 2024-02-09 19:05:07.904 [INFO][4561] dataplane_linux.go 68: Setting the host side veth name to cali02c43c15c8a ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.953364 env[1412]: 2024-02-09 19:05:07.925 [INFO][4561] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:07.953364 env[1412]: 
2024-02-09 19:05:07.926 [INFO][4561] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0", GenerateName:"calico-kube-controllers-b4fdbd88f-", Namespace:"calico-system", SelfLink:"", UID:"6eccece5-9f29-4db5-bff8-1f226e4e1432", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4fdbd88f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa", Pod:"calico-kube-controllers-b4fdbd88f-42mmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02c43c15c8a", MAC:"02:35:6b:01:83:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:07.953364 env[1412]: 2024-02-09 
19:05:07.940 [INFO][4561] k8s.go 491: Wrote updated endpoint to datastore ContainerID="b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa" Namespace="calico-system" Pod="calico-kube-controllers-b4fdbd88f-42mmn" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:08.010000 audit[4632]: NETFILTER_CFG table=filter:124 family=2 entries=34 op=nft_register_chain pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:08.010000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=18320 a0=3 a1=7ffeb999b480 a2=0 a3=7ffeb999b46c items=0 ppid=4226 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.010000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:08.038764 systemd-networkd[1556]: cali0abc9869b55: Link UP Feb 9 19:05:08.051765 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0abc9869b55: link becomes ready Feb 9 19:05:08.050849 systemd-networkd[1556]: cali0abc9869b55: Gained carrier Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.797 [INFO][4572] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0 coredns-787d4945fb- kube-system 74ec09d8-0957-412d-816d-b7f3528f1e43 831 0 2024-02-09 19:04:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-00ed68a33d coredns-787d4945fb-whh7p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0abc9869b55 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] []}} ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.798 [INFO][4572] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.959 [INFO][4605] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" HandleID="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.979 [INFO][4605] ipam_plugin.go 268: Auto assigning IP ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" HandleID="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291240), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"coredns-787d4945fb-whh7p", "timestamp":"2024-02-09 19:05:07.956448087 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.979 [INFO][4605] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.979 [INFO][4605] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.979 [INFO][4605] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.982 [INFO][4605] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.987 [INFO][4605] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.992 [INFO][4605] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.996 [INFO][4605] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.999 [INFO][4605] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:07.999 [INFO][4605] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.001 [INFO][4605] ipam.go 1682: Creating new handle: k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8 Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.008 [INFO][4605] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.019 [INFO][4605] ipam.go 1216: Successfully claimed 
IPs: [192.168.88.67/26] block=192.168.88.64/26 handle="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.019 [INFO][4605] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.67/26] handle="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.019 [INFO][4605] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:08.071919 env[1412]: 2024-02-09 19:05:08.019 [INFO][4605] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.67/26] IPv6=[] ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" HandleID="k8s-pod-network.fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.021 [INFO][4572] k8s.go 385: Populated endpoint ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"74ec09d8-0957-412d-816d-b7f3528f1e43", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"coredns-787d4945fb-whh7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc9869b55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.021 [INFO][4572] k8s.go 386: Calico CNI using IPs: [192.168.88.67/32] ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.022 [INFO][4572] dataplane_linux.go 68: Setting the host side veth name to cali0abc9869b55 ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.052 [INFO][4572] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" 
WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.054 [INFO][4572] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"74ec09d8-0957-412d-816d-b7f3528f1e43", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8", Pod:"coredns-787d4945fb-whh7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc9869b55", MAC:"0e:df:ec:47:5a:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:08.074536 env[1412]: 2024-02-09 19:05:08.065 [INFO][4572] k8s.go 491: Wrote updated endpoint to datastore ContainerID="fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8" Namespace="kube-system" Pod="coredns-787d4945fb-whh7p" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:08.126079 systemd-networkd[1556]: calid35d551e350: Link UP Feb 9 19:05:08.126000 audit[4661]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:08.126000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=25136 a0=3 a1=7ffe9736a030 a2=0 a3=7ffe9736a01c items=0 ppid=4226 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.135334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid35d551e350: link becomes ready Feb 9 19:05:08.126000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:08.134279 systemd-networkd[1556]: calid35d551e350: Gained carrier Feb 9 19:05:08.159638 env[1412]: time="2024-02-09T19:05:08.144036075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:08.159638 env[1412]: time="2024-02-09T19:05:08.144133775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:08.159638 env[1412]: time="2024-02-09T19:05:08.144149575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:08.159638 env[1412]: time="2024-02-09T19:05:08.144471577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa pid=4653 runtime=io.containerd.runc.v2 Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:07.834 [INFO][4583] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0 coredns-787d4945fb- kube-system 61c035ba-b6dd-472f-9488-d6ad43894181 832 0 2024-02-09 19:04:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-00ed68a33d coredns-787d4945fb-q7cfj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid35d551e350 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:07.834 [INFO][4583] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.021 [INFO][4610] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" HandleID="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.036 [INFO][4610] ipam_plugin.go 268: Auto assigning IP ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" HandleID="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027cb80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"coredns-787d4945fb-q7cfj", "timestamp":"2024-02-09 19:05:08.021524596 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.037 [INFO][4610] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.037 [INFO][4610] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.037 [INFO][4610] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.040 [INFO][4610] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.052 [INFO][4610] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.060 [INFO][4610] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.066 [INFO][4610] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.070 [INFO][4610] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.070 [INFO][4610] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.075 [INFO][4610] ipam.go 1682: Creating new handle: k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487 Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.083 [INFO][4610] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.096 [INFO][4610] ipam.go 1216: Successfully claimed IPs: [192.168.88.68/26] block=192.168.88.64/26 
handle="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.096 [INFO][4610] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.68/26] handle="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.096 [INFO][4610] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:08.167082 env[1412]: 2024-02-09 19:05:08.096 [INFO][4610] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.68/26] IPv6=[] ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" HandleID="k8s-pod-network.0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.112 [INFO][4583] k8s.go 385: Populated endpoint ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"61c035ba-b6dd-472f-9488-d6ad43894181", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"coredns-787d4945fb-q7cfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35d551e350", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.112 [INFO][4583] k8s.go 386: Calico CNI using IPs: [192.168.88.68/32] ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.112 [INFO][4583] dataplane_linux.go 68: Setting the host side veth name to calid35d551e350 ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.139 [INFO][4583] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" 
WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.140 [INFO][4583] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"61c035ba-b6dd-472f-9488-d6ad43894181", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487", Pod:"coredns-787d4945fb-q7cfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35d551e350", MAC:"d6:62:a3:ed:73:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:08.168627 env[1412]: 2024-02-09 19:05:08.160 [INFO][4583] k8s.go 491: Wrote updated endpoint to datastore ContainerID="0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487" Namespace="kube-system" Pod="coredns-787d4945fb-q7cfj" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:08.207416 kernel: kauditd_printk_skb: 119 callbacks suppressed Feb 9 19:05:08.207607 kernel: audit: type=1325 audit(1707505508.187:322): table=filter:126 family=2 entries=34 op=nft_register_chain pid=4680 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:08.187000 audit[4680]: NETFILTER_CFG table=filter:126 family=2 entries=34 op=nft_register_chain pid=4680 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:08.187000 audit[4680]: SYSCALL arch=c000003e syscall=46 success=yes exit=17884 a0=3 a1=7ffee8de2c00 a2=0 a3=7ffee8de2bec items=0 ppid=4226 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.244807 kernel: audit: type=1300 audit(1707505508.187:322): arch=c000003e syscall=46 success=yes exit=17884 a0=3 a1=7ffee8de2c00 a2=0 a3=7ffee8de2bec items=0 ppid=4226 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.244971 kernel: audit: type=1327 audit(1707505508.187:322): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:08.187000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:08.293031 env[1412]: time="2024-02-09T19:05:08.263292438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:08.293031 env[1412]: time="2024-02-09T19:05:08.263365838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:08.293031 env[1412]: time="2024-02-09T19:05:08.263382138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:08.293031 env[1412]: time="2024-02-09T19:05:08.263573339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8 pid=4697 runtime=io.containerd.runc.v2 Feb 9 19:05:08.386320 env[1412]: time="2024-02-09T19:05:08.386086918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:08.386320 env[1412]: time="2024-02-09T19:05:08.386132618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:08.386320 env[1412]: time="2024-02-09T19:05:08.386143618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:08.405664 env[1412]: time="2024-02-09T19:05:08.393409852Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487 pid=4733 runtime=io.containerd.runc.v2 Feb 9 19:05:08.408507 env[1412]: time="2024-02-09T19:05:08.408457923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4fdbd88f-42mmn,Uid:6eccece5-9f29-4db5-bff8-1f226e4e1432,Namespace:calico-system,Attempt:1,} returns sandbox id \"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa\"" Feb 9 19:05:08.408684 env[1412]: time="2024-02-09T19:05:08.408634424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-whh7p,Uid:74ec09d8-0957-412d-816d-b7f3528f1e43,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8\"" Feb 9 19:05:08.420509 env[1412]: time="2024-02-09T19:05:08.420353880Z" level=info msg="CreateContainer within sandbox \"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:05:08.480077 env[1412]: time="2024-02-09T19:05:08.479995261Z" level=info msg="CreateContainer within sandbox \"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbb4e58a5e204c820556e7b828ab7e50ae623637aff0ebc4cc32448d6f2dc52d\"" Feb 9 19:05:08.481570 env[1412]: time="2024-02-09T19:05:08.481514269Z" level=info msg="StartContainer for \"bbb4e58a5e204c820556e7b828ab7e50ae623637aff0ebc4cc32448d6f2dc52d\"" Feb 9 19:05:08.491398 env[1412]: time="2024-02-09T19:05:08.491337115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q7cfj,Uid:61c035ba-b6dd-472f-9488-d6ad43894181,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487\"" Feb 9 19:05:08.496531 env[1412]: time="2024-02-09T19:05:08.496469139Z" level=info msg="CreateContainer within sandbox \"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:05:08.556911 env[1412]: time="2024-02-09T19:05:08.556830824Z" level=info msg="StartContainer for \"bbb4e58a5e204c820556e7b828ab7e50ae623637aff0ebc4cc32448d6f2dc52d\" returns successfully" Feb 9 19:05:08.579554 env[1412]: time="2024-02-09T19:05:08.579476931Z" level=info msg="CreateContainer within sandbox \"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9661922324957dedce661be546dd9438d30c5a0b1ecff2bb1fbd7c3d81382599\"" Feb 9 19:05:08.594053 env[1412]: time="2024-02-09T19:05:08.591394187Z" level=info msg="StartContainer for \"9661922324957dedce661be546dd9438d30c5a0b1ecff2bb1fbd7c3d81382599\"" Feb 9 19:05:08.661583 kubelet[2584]: I0209 19:05:08.661532 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-whh7p" podStartSLOduration=58.661466818 pod.CreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:08.658653705 +0000 UTC m=+70.542652317" watchObservedRunningTime="2024-02-09 19:05:08.661466818 +0000 UTC m=+70.545465530" Feb 9 19:05:08.683588 systemd[1]: run-containerd-runc-k8s.io-9661922324957dedce661be546dd9438d30c5a0b1ecff2bb1fbd7c3d81382599-runc.XWUtOC.mount: Deactivated successfully. 
Feb 9 19:05:08.769299 env[1412]: time="2024-02-09T19:05:08.769211027Z" level=info msg="StartContainer for \"9661922324957dedce661be546dd9438d30c5a0b1ecff2bb1fbd7c3d81382599\" returns successfully" Feb 9 19:05:08.836152 kernel: audit: type=1325 audit(1707505508.822:323): table=filter:127 family=2 entries=12 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:08.822000 audit[4888]: NETFILTER_CFG table=filter:127 family=2 entries=12 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:08.822000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff1ee80f30 a2=0 a3=7fff1ee80f1c items=0 ppid=2744 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.861149 kernel: audit: type=1300 audit(1707505508.822:323): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff1ee80f30 a2=0 a3=7fff1ee80f1c items=0 ppid=2744 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:08.874205 kernel: audit: type=1327 audit(1707505508.822:323): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:08.823000 audit[4888]: NETFILTER_CFG table=nat:128 family=2 entries=30 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:08.888325 kernel: audit: type=1325 audit(1707505508.823:324): table=nat:128 family=2 entries=30 op=nft_register_rule pid=4888 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:08.823000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fff1ee80f30 a2=0 a3=7fff1ee80f1c items=0 ppid=2744 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.930616 kernel: audit: type=1300 audit(1707505508.823:324): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fff1ee80f30 a2=0 a3=7fff1ee80f1c items=0 ppid=2744 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:08.930802 kernel: audit: type=1327 audit(1707505508.823:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:08.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:08.977527 env[1412]: time="2024-02-09T19:05:08.977351210Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:08.984446 env[1412]: time="2024-02-09T19:05:08.984381643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:08.989279 env[1412]: time="2024-02-09T19:05:08.989223866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:08.994996 env[1412]: time="2024-02-09T19:05:08.994946993Z" level=info 
msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:08.995603 env[1412]: time="2024-02-09T19:05:08.995562496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:05:08.998419 env[1412]: time="2024-02-09T19:05:08.998381009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:05:09.001778 env[1412]: time="2024-02-09T19:05:09.000791121Z" level=info msg="CreateContainer within sandbox \"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:05:09.098963 env[1412]: time="2024-02-09T19:05:09.098871580Z" level=info msg="CreateContainer within sandbox \"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"798ad127fb5815d284fd20d86723643ac8aff99ff1281405d24686f01eee4115\"" Feb 9 19:05:09.100977 env[1412]: time="2024-02-09T19:05:09.099663383Z" level=info msg="StartContainer for \"798ad127fb5815d284fd20d86723643ac8aff99ff1281405d24686f01eee4115\"" Feb 9 19:05:09.178795 env[1412]: time="2024-02-09T19:05:09.178735853Z" level=info msg="StartContainer for \"798ad127fb5815d284fd20d86723643ac8aff99ff1281405d24686f01eee4115\" returns successfully" Feb 9 19:05:09.185290 systemd-networkd[1556]: cali0abc9869b55: Gained IPv6LL Feb 9 19:05:09.505325 systemd-networkd[1556]: calid35d551e350: Gained IPv6LL Feb 9 19:05:09.590743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102781733.mount: Deactivated successfully. 
Feb 9 19:05:09.633404 systemd-networkd[1556]: cali02c43c15c8a: Gained IPv6LL Feb 9 19:05:09.684720 kubelet[2584]: I0209 19:05:09.684673 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-q7cfj" podStartSLOduration=59.68460022 pod.CreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:09.668375744 +0000 UTC m=+71.552374356" watchObservedRunningTime="2024-02-09 19:05:09.68460022 +0000 UTC m=+71.568598832" Feb 9 19:05:09.749000 audit[4953]: NETFILTER_CFG table=filter:129 family=2 entries=12 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:09.762133 kernel: audit: type=1325 audit(1707505509.749:325): table=filter:129 family=2 entries=12 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:09.749000 audit[4953]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc232b3a50 a2=0 a3=7ffc232b3a3c items=0 ppid=2744 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:09.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:09.750000 audit[4953]: NETFILTER_CFG table=nat:130 family=2 entries=30 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:09.750000 audit[4953]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc232b3a50 a2=0 a3=7ffc232b3a3c items=0 ppid=2744 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:09.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:09.800000 audit[4979]: NETFILTER_CFG table=filter:131 family=2 entries=9 op=nft_register_rule pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:09.800000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff0a099aa0 a2=0 a3=7fff0a099a8c items=0 ppid=2744 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:09.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:09.820000 audit[4979]: NETFILTER_CFG table=nat:132 family=2 entries=63 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:09.820000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff0a099aa0 a2=0 a3=7fff0a099a8c items=0 ppid=2744 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:09.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:09.923883 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.56xoeD.mount: Deactivated successfully. 
Feb 9 19:05:13.753493 env[1412]: time="2024-02-09T19:05:13.753424390Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:13.761413 env[1412]: time="2024-02-09T19:05:13.761352626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:13.767651 env[1412]: time="2024-02-09T19:05:13.767599554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:13.773378 env[1412]: time="2024-02-09T19:05:13.773327080Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:13.775231 env[1412]: time="2024-02-09T19:05:13.775181489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 19:05:13.790093 env[1412]: time="2024-02-09T19:05:13.790034356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:05:13.807093 env[1412]: time="2024-02-09T19:05:13.807002932Z" level=info msg="CreateContainer within sandbox \"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:05:13.843850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570781637.mount: Deactivated successfully. 
Feb 9 19:05:13.847631 env[1412]: time="2024-02-09T19:05:13.847581516Z" level=info msg="CreateContainer within sandbox \"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828\"" Feb 9 19:05:13.848526 env[1412]: time="2024-02-09T19:05:13.848394319Z" level=info msg="StartContainer for \"c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828\"" Feb 9 19:05:13.940656 env[1412]: time="2024-02-09T19:05:13.940590536Z" level=info msg="StartContainer for \"c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828\" returns successfully" Feb 9 19:05:14.742477 kubelet[2584]: I0209 19:05:14.742416 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b4fdbd88f-42mmn" podStartSLOduration=-9.223371978112423e+09 pod.CreationTimestamp="2024-02-09 19:04:16 +0000 UTC" firstStartedPulling="2024-02-09 19:05:08.411126636 +0000 UTC m=+70.295125248" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:14.685474776 +0000 UTC m=+76.569473388" watchObservedRunningTime="2024-02-09 19:05:14.742353531 +0000 UTC m=+76.626352143" Feb 9 19:05:15.595481 kubelet[2584]: I0209 19:05:15.595428 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:05:15.605417 kubelet[2584]: W0209 19:05:15.605376 2584 reflector.go:424] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-00ed68a33d" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-00ed68a33d' and this object Feb 9 19:05:15.605417 kubelet[2584]: E0209 19:05:15.605422 2584 reflector.go:140] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-00ed68a33d" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-00ed68a33d' and this object Feb 9 19:05:15.605652 kubelet[2584]: W0209 19:05:15.605467 2584 reflector.go:424] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510.3.2-a-00ed68a33d" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-00ed68a33d' and this object Feb 9 19:05:15.605652 kubelet[2584]: E0209 19:05:15.605481 2584 reflector.go:140] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510.3.2-a-00ed68a33d" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-00ed68a33d' and this object Feb 9 19:05:15.617252 kubelet[2584]: I0209 19:05:15.617217 2584 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:05:15.622937 kubelet[2584]: I0209 19:05:15.622910 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc096c8b-064d-4912-92c7-fd69959ca2d6-calico-apiserver-certs\") pod \"calico-apiserver-567886577-8jj9h\" (UID: \"cc096c8b-064d-4912-92c7-fd69959ca2d6\") " pod="calico-apiserver/calico-apiserver-567886577-8jj9h" Feb 9 19:05:15.623103 kubelet[2584]: I0209 19:05:15.622984 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdvd\" (UniqueName: \"kubernetes.io/projected/cc096c8b-064d-4912-92c7-fd69959ca2d6-kube-api-access-bkdvd\") pod \"calico-apiserver-567886577-8jj9h\" 
(UID: \"cc096c8b-064d-4912-92c7-fd69959ca2d6\") " pod="calico-apiserver/calico-apiserver-567886577-8jj9h" Feb 9 19:05:15.725723 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 19:05:15.725922 kernel: audit: type=1325 audit(1707505515.708:329): table=filter:133 family=2 entries=6 op=nft_register_rule pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.708000 audit[5099]: NETFILTER_CFG table=filter:133 family=2 entries=6 op=nft_register_rule pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.726070 kubelet[2584]: I0209 19:05:15.724055 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twx8r\" (UniqueName: \"kubernetes.io/projected/d06ee260-8904-41d2-9ad2-9df5c9a99a0a-kube-api-access-twx8r\") pod \"calico-apiserver-567886577-jnkmg\" (UID: \"d06ee260-8904-41d2-9ad2-9df5c9a99a0a\") " pod="calico-apiserver/calico-apiserver-567886577-jnkmg" Feb 9 19:05:15.726070 kubelet[2584]: I0209 19:05:15.724186 2584 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d06ee260-8904-41d2-9ad2-9df5c9a99a0a-calico-apiserver-certs\") pod \"calico-apiserver-567886577-jnkmg\" (UID: \"d06ee260-8904-41d2-9ad2-9df5c9a99a0a\") " pod="calico-apiserver/calico-apiserver-567886577-jnkmg" Feb 9 19:05:15.708000 audit[5099]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffce39568a0 a2=0 a3=7ffce395688c items=0 ppid=2744 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.758212 kernel: audit: type=1300 
audit(1707505515.708:329): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffce39568a0 a2=0 a3=7ffce395688c items=0 ppid=2744 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.758330 kernel: audit: type=1327 audit(1707505515.708:329): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.717000 audit[5099]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.781047 kernel: audit: type=1325 audit(1707505515.717:330): table=nat:134 family=2 entries=78 op=nft_register_rule pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.717000 audit[5099]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffce39568a0 a2=0 a3=7ffce395688c items=0 ppid=2744 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.810038 kernel: audit: type=1300 audit(1707505515.717:330): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffce39568a0 a2=0 a3=7ffce395688c items=0 ppid=2744 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.825490 kernel: audit: type=1327 audit(1707505515.717:330): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 
9 19:05:15.887000 audit[5125]: NETFILTER_CFG table=filter:135 family=2 entries=7 op=nft_register_rule pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.887000 audit[5125]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd02216180 a2=0 a3=7ffd0221616c items=0 ppid=2744 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.920959 kernel: audit: type=1325 audit(1707505515.887:331): table=filter:135 family=2 entries=7 op=nft_register_rule pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.921128 kernel: audit: type=1300 audit(1707505515.887:331): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd02216180 a2=0 a3=7ffd0221616c items=0 ppid=2744 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:15.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.932052 kernel: audit: type=1327 audit(1707505515.887:331): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.889000 audit[5125]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:15.889000 audit[5125]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd02216180 a2=0 a3=7ffd0221616c items=0 ppid=2744 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:05:15.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:15.943053 kernel: audit: type=1325 audit(1707505515.889:332): table=nat:136 family=2 entries=78 op=nft_register_rule pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:16.363950 env[1412]: time="2024-02-09T19:05:16.363894736Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:16.373052 env[1412]: time="2024-02-09T19:05:16.372994676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:16.378677 env[1412]: time="2024-02-09T19:05:16.378644201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:16.390144 env[1412]: time="2024-02-09T19:05:16.387378639Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:16.390144 env[1412]: time="2024-02-09T19:05:16.387743841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:05:16.391475 env[1412]: time="2024-02-09T19:05:16.391122156Z" level=info msg="CreateContainer within sandbox \"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 
19:05:16.442994 env[1412]: time="2024-02-09T19:05:16.442931384Z" level=info msg="CreateContainer within sandbox \"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b9ba77618e6f525fd960b2960a2d83bd6ef12a431e808ea1f5226c0ed547824\"" Feb 9 19:05:16.443688 env[1412]: time="2024-02-09T19:05:16.443632388Z" level=info msg="StartContainer for \"0b9ba77618e6f525fd960b2960a2d83bd6ef12a431e808ea1f5226c0ed547824\"" Feb 9 19:05:16.484964 systemd[1]: run-containerd-runc-k8s.io-0b9ba77618e6f525fd960b2960a2d83bd6ef12a431e808ea1f5226c0ed547824-runc.sIO8nS.mount: Deactivated successfully. Feb 9 19:05:16.545648 env[1412]: time="2024-02-09T19:05:16.545595437Z" level=info msg="StartContainer for \"0b9ba77618e6f525fd960b2960a2d83bd6ef12a431e808ea1f5226c0ed547824\" returns successfully" Feb 9 19:05:16.725384 kubelet[2584]: E0209 19:05:16.725226 2584 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 19:05:16.726297 kubelet[2584]: E0209 19:05:16.726228 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc096c8b-064d-4912-92c7-fd69959ca2d6-calico-apiserver-certs podName:cc096c8b-064d-4912-92c7-fd69959ca2d6 nodeName:}" failed. No retries permitted until 2024-02-09 19:05:17.226187534 +0000 UTC m=+79.110186146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/cc096c8b-064d-4912-92c7-fd69959ca2d6-calico-apiserver-certs") pod "calico-apiserver-567886577-8jj9h" (UID: "cc096c8b-064d-4912-92c7-fd69959ca2d6") : failed to sync secret cache: timed out waiting for the condition Feb 9 19:05:16.825200 kubelet[2584]: E0209 19:05:16.825152 2584 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 19:05:16.825485 kubelet[2584]: E0209 19:05:16.825295 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d06ee260-8904-41d2-9ad2-9df5c9a99a0a-calico-apiserver-certs podName:d06ee260-8904-41d2-9ad2-9df5c9a99a0a nodeName:}" failed. No retries permitted until 2024-02-09 19:05:17.325263371 +0000 UTC m=+79.209261983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d06ee260-8904-41d2-9ad2-9df5c9a99a0a-calico-apiserver-certs") pod "calico-apiserver-567886577-jnkmg" (UID: "d06ee260-8904-41d2-9ad2-9df5c9a99a0a") : failed to sync secret cache: timed out waiting for the condition Feb 9 19:05:16.895683 kubelet[2584]: I0209 19:05:16.895633 2584 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:05:16.895683 kubelet[2584]: I0209 19:05:16.895691 2584 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:05:17.404196 env[1412]: time="2024-02-09T19:05:17.403625108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567886577-8jj9h,Uid:cc096c8b-064d-4912-92c7-fd69959ca2d6,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:05:17.424947 env[1412]: time="2024-02-09T19:05:17.424897501Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567886577-jnkmg,Uid:d06ee260-8904-41d2-9ad2-9df5c9a99a0a,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:05:17.685764 systemd-networkd[1556]: calif8a877b4b0a: Link UP Feb 9 19:05:17.697998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:05:17.698204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif8a877b4b0a: link becomes ready Feb 9 19:05:17.704864 systemd-networkd[1556]: calif8a877b4b0a: Gained carrier Feb 9 19:05:17.721752 kubelet[2584]: I0209 19:05:17.721417 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-r2kjb" podStartSLOduration=-9.223371975133438e+09 pod.CreationTimestamp="2024-02-09 19:04:16 +0000 UTC" firstStartedPulling="2024-02-09 19:05:05.789283189 +0000 UTC m=+67.673281801" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:16.69816731 +0000 UTC m=+78.582165922" watchObservedRunningTime="2024-02-09 19:05:17.721336999 +0000 UTC m=+79.605335711" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.555 [INFO][5164] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0 calico-apiserver-567886577- calico-apiserver cc096c8b-064d-4912-92c7-fd69959ca2d6 938 0 2024-02-09 19:05:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567886577 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-00ed68a33d calico-apiserver-567886577-8jj9h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8a877b4b0a [] []}} ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" 
WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.556 [INFO][5164] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.615 [INFO][5188] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" HandleID="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.634 [INFO][5188] ipam_plugin.go 268: Auto assigning IP ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" HandleID="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f1240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"calico-apiserver-567886577-8jj9h", "timestamp":"2024-02-09 19:05:17.615925137 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.634 [INFO][5188] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.634 [INFO][5188] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.634 [INFO][5188] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.636 [INFO][5188] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.640 [INFO][5188] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.644 [INFO][5188] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.646 [INFO][5188] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.648 [INFO][5188] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.648 [INFO][5188] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.650 [INFO][5188] ipam.go 1682: Creating new handle: k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.656 [INFO][5188] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.664 [INFO][5188] ipam.go 1216: Successfully claimed IPs: [192.168.88.69/26] block=192.168.88.64/26 
handle="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.664 [INFO][5188] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.69/26] handle="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.664 [INFO][5188] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:17.724010 env[1412]: 2024-02-09 19:05:17.664 [INFO][5188] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.69/26] IPv6=[] ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" HandleID="k8s-pod-network.decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.666 [INFO][5164] k8s.go 385: Populated endpoint ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0", GenerateName:"calico-apiserver-567886577-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc096c8b-064d-4912-92c7-fd69959ca2d6", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567886577", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"calico-apiserver-567886577-8jj9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a877b4b0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.667 [INFO][5164] k8s.go 386: Calico CNI using IPs: [192.168.88.69/32] ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.667 [INFO][5164] dataplane_linux.go 68: Setting the host side veth name to calif8a877b4b0a ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.706 [INFO][5164] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.707 [INFO][5164] k8s.go 413: Added Mac, interface name, and active container ID to 
endpoint ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0", GenerateName:"calico-apiserver-567886577-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc096c8b-064d-4912-92c7-fd69959ca2d6", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567886577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd", Pod:"calico-apiserver-567886577-8jj9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a877b4b0a", MAC:"12:39:fe:50:04:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:17.724866 env[1412]: 2024-02-09 19:05:17.722 [INFO][5164] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-8jj9h" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--8jj9h-eth0" Feb 9 19:05:17.776823 env[1412]: time="2024-02-09T19:05:17.776727541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:17.777143 env[1412]: time="2024-02-09T19:05:17.777098443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:17.777299 env[1412]: time="2024-02-09T19:05:17.777268943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:17.777616 env[1412]: time="2024-02-09T19:05:17.777579445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd pid=5227 runtime=io.containerd.runc.v2 Feb 9 19:05:17.781122 systemd-networkd[1556]: cali22a1a26c267: Link UP Feb 9 19:05:17.789265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22a1a26c267: link becomes ready Feb 9 19:05:17.789036 systemd-networkd[1556]: cali22a1a26c267: Gained carrier Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.594 [INFO][5174] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0 calico-apiserver-567886577- calico-apiserver d06ee260-8904-41d2-9ad2-9df5c9a99a0a 942 0 2024-02-09 19:05:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567886577 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-3510.3.2-a-00ed68a33d calico-apiserver-567886577-jnkmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22a1a26c267 [] []}} ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.594 [INFO][5174] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.689 [INFO][5195] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" HandleID="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.714 [INFO][5195] ipam_plugin.go 268: Auto assigning IP ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" HandleID="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003259f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-00ed68a33d", "pod":"calico-apiserver-567886577-jnkmg", "timestamp":"2024-02-09 19:05:17.689912661 +0000 UTC"}, Hostname:"ci-3510.3.2-a-00ed68a33d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.714 [INFO][5195] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.714 [INFO][5195] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.714 [INFO][5195] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-00ed68a33d' Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.716 [INFO][5195] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.729 [INFO][5195] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.733 [INFO][5195] ipam.go 489: Trying affinity for 192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.753 [INFO][5195] ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.756 [INFO][5195] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.756 [INFO][5195] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.757 [INFO][5195] ipam.go 1682: Creating new handle: k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.762 [INFO][5195] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 
handle="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.770 [INFO][5195] ipam.go 1216: Successfully claimed IPs: [192.168.88.70/26] block=192.168.88.64/26 handle="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.770 [INFO][5195] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.70/26] handle="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" host="ci-3510.3.2-a-00ed68a33d" Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.770 [INFO][5195] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:17.826150 env[1412]: 2024-02-09 19:05:17.770 [INFO][5195] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.70/26] IPv6=[] ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" HandleID="k8s-pod-network.22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.773 [INFO][5174] k8s.go 385: Populated endpoint ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0", GenerateName:"calico-apiserver-567886577-", Namespace:"calico-apiserver", SelfLink:"", UID:"d06ee260-8904-41d2-9ad2-9df5c9a99a0a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 5, 15, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567886577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"", Pod:"calico-apiserver-567886577-jnkmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22a1a26c267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.773 [INFO][5174] k8s.go 386: Calico CNI using IPs: [192.168.88.70/32] ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.773 [INFO][5174] dataplane_linux.go 68: Setting the host side veth name to cali22a1a26c267 ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.789 [INFO][5174] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" 
Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.790 [INFO][5174] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0", GenerateName:"calico-apiserver-567886577-", Namespace:"calico-apiserver", SelfLink:"", UID:"d06ee260-8904-41d2-9ad2-9df5c9a99a0a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567886577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c", Pod:"calico-apiserver-567886577-jnkmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22a1a26c267", MAC:"92:7c:70:ff:3a:01", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:17.827178 env[1412]: 2024-02-09 19:05:17.822 [INFO][5174] k8s.go 491: Wrote updated endpoint to datastore ContainerID="22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c" Namespace="calico-apiserver" Pod="calico-apiserver-567886577-jnkmg" WorkloadEndpoint="ci--3510.3.2--a--00ed68a33d-k8s-calico--apiserver--567886577--jnkmg-eth0" Feb 9 19:05:17.829000 audit[5243]: NETFILTER_CFG table=filter:137 family=2 entries=55 op=nft_register_chain pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:17.829000 audit[5243]: SYSCALL arch=c000003e syscall=46 success=yes exit=28088 a0=3 a1=7ffedbf6e770 a2=0 a3=7ffedbf6e75c items=0 ppid=4226 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:17.829000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:17.889000 audit[5264]: NETFILTER_CFG table=filter:138 family=2 entries=46 op=nft_register_chain pid=5264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:05:17.889000 audit[5264]: SYSCALL arch=c000003e syscall=46 success=yes exit=23292 a0=3 a1=7ffe220f4c60 a2=0 a3=7ffe220f4c4c items=0 ppid=4226 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:17.889000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:05:17.905560 env[1412]: time="2024-02-09T19:05:17.905434304Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:05:17.905560 env[1412]: time="2024-02-09T19:05:17.905513605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:05:17.905560 env[1412]: time="2024-02-09T19:05:17.905533305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:17.909046 env[1412]: time="2024-02-09T19:05:17.906162308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c pid=5270 runtime=io.containerd.runc.v2 Feb 9 19:05:17.992127 env[1412]: time="2024-02-09T19:05:17.991936483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567886577-8jj9h,Uid:cc096c8b-064d-4912-92c7-fd69959ca2d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd\"" Feb 9 19:05:17.998588 env[1412]: time="2024-02-09T19:05:17.998536712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:05:18.043503 env[1412]: time="2024-02-09T19:05:18.043457607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567886577-jnkmg,Uid:d06ee260-8904-41d2-9ad2-9df5c9a99a0a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c\"" Feb 9 19:05:19.169421 systemd-networkd[1556]: calif8a877b4b0a: Gained IPv6LL Feb 9 19:05:19.169857 systemd-networkd[1556]: cali22a1a26c267: Gained IPv6LL Feb 9 19:05:21.119811 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.3d4H36.mount: Deactivated successfully. 
Feb 9 19:05:23.229614 env[1412]: time="2024-02-09T19:05:23.229543095Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.241462 env[1412]: time="2024-02-09T19:05:23.241387345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.246241 env[1412]: time="2024-02-09T19:05:23.246179965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.251174 env[1412]: time="2024-02-09T19:05:23.251123286Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.251954 env[1412]: time="2024-02-09T19:05:23.251911189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:05:23.253593 env[1412]: time="2024-02-09T19:05:23.253550096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:05:23.257493 env[1412]: time="2024-02-09T19:05:23.257430912Z" level=info msg="CreateContainer within sandbox \"decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:05:23.311826 env[1412]: time="2024-02-09T19:05:23.311737140Z" level=info msg="CreateContainer within sandbox \"decd976cd4f042e81dbab77b09dfb4cfe711baf665d76a969fc901278805a6bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72\"" Feb 9 19:05:23.313032 env[1412]: time="2024-02-09T19:05:23.312963745Z" level=info msg="StartContainer for \"770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72\"" Feb 9 19:05:23.376064 systemd[1]: run-containerd-runc-k8s.io-770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72-runc.DlbUxM.mount: Deactivated successfully. Feb 9 19:05:23.453222 env[1412]: time="2024-02-09T19:05:23.453156033Z" level=info msg="StartContainer for \"770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72\" returns successfully" Feb 9 19:05:23.732719 kubelet[2584]: I0209 19:05:23.732664 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567886577-8jj9h" podStartSLOduration=-9.223372028122164e+09 pod.CreationTimestamp="2024-02-09 19:05:15 +0000 UTC" firstStartedPulling="2024-02-09 19:05:17.997667808 +0000 UTC m=+79.881666420" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:23.721272258 +0000 UTC m=+85.605270970" watchObservedRunningTime="2024-02-09 19:05:23.732612706 +0000 UTC m=+85.616611318" Feb 9 19:05:23.780436 env[1412]: time="2024-02-09T19:05:23.780368006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.792664 env[1412]: time="2024-02-09T19:05:23.792598557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.798639 env[1412]: time="2024-02-09T19:05:23.798583483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.806520 env[1412]: 
time="2024-02-09T19:05:23.806464116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:23.807793 env[1412]: time="2024-02-09T19:05:23.807738021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:05:23.811570 env[1412]: time="2024-02-09T19:05:23.811530737Z" level=info msg="CreateContainer within sandbox \"22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:05:23.848465 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:05:23.848704 kernel: audit: type=1325 audit(1707505523.828:335): table=filter:139 family=2 entries=8 op=nft_register_rule pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:23.828000 audit[5397]: NETFILTER_CFG table=filter:139 family=2 entries=8 op=nft_register_rule pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:23.828000 audit[5397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdf9a7a000 a2=0 a3=7ffdf9a79fec items=0 ppid=2744 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.886136 kernel: audit: type=1300 audit(1707505523.828:335): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdf9a7a000 a2=0 a3=7ffdf9a79fec items=0 ppid=2744 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.828000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:23.901043 kernel: audit: type=1327 audit(1707505523.828:335): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:23.869000 audit[5397]: NETFILTER_CFG table=nat:140 family=2 entries=78 op=nft_register_rule pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:23.914048 kernel: audit: type=1325 audit(1707505523.869:336): table=nat:140 family=2 entries=78 op=nft_register_rule pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:23.869000 audit[5397]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffdf9a7a000 a2=0 a3=7ffdf9a79fec items=0 ppid=2744 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.915340 env[1412]: time="2024-02-09T19:05:23.915282672Z" level=info msg="CreateContainer within sandbox \"22e205c13caaf646071caf58268de3fed3fa4c9f0b5c69bfda240c3129ba222c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924\"" Feb 9 19:05:23.935450 env[1412]: time="2024-02-09T19:05:23.935382257Z" level=info msg="StartContainer for \"6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924\"" Feb 9 19:05:23.938083 kernel: audit: type=1300 audit(1707505523.869:336): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffdf9a7a000 a2=0 a3=7ffdf9a79fec items=0 ppid=2744 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:23.869000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:23.972112 kernel: audit: type=1327 audit(1707505523.869:336): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:24.128944 env[1412]: time="2024-02-09T19:05:24.128864065Z" level=info msg="StartContainer for \"6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924\" returns successfully" Feb 9 19:05:24.291369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614419609.mount: Deactivated successfully. Feb 9 19:05:24.817000 audit[5467]: NETFILTER_CFG table=filter:141 family=2 entries=8 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:24.817000 audit[5467]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd1d4e9a90 a2=0 a3=7ffd1d4e9a7c items=0 ppid=2744 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:24.854363 kernel: audit: type=1325 audit(1707505524.817:337): table=filter:141 family=2 entries=8 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:24.854623 kernel: audit: type=1300 audit(1707505524.817:337): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd1d4e9a90 a2=0 a3=7ffd1d4e9a7c items=0 ppid=2744 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:24.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:24.868411 kernel: audit: type=1327 audit(1707505524.817:337): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:24.819000 audit[5467]: NETFILTER_CFG table=nat:142 family=2 entries=78 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:24.881193 kernel: audit: type=1325 audit(1707505524.819:338): table=nat:142 family=2 entries=78 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:24.819000 audit[5467]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd1d4e9a90 a2=0 a3=7ffd1d4e9a7c items=0 ppid=2744 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:24.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:39.930573 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.K6KRRp.mount: Deactivated successfully. Feb 9 19:05:47.431133 systemd[1]: run-containerd-runc-k8s.io-770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72-runc.O1Bj9v.mount: Deactivated successfully. Feb 9 19:05:47.486544 systemd[1]: run-containerd-runc-k8s.io-6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924-runc.jjaODK.mount: Deactivated successfully. 
Feb 9 19:05:47.508930 kubelet[2584]: I0209 19:05:47.508882 2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567886577-jnkmg" podStartSLOduration=-9.223372004345957e+09 pod.CreationTimestamp="2024-02-09 19:05:15 +0000 UTC" firstStartedPulling="2024-02-09 19:05:18.045245115 +0000 UTC m=+79.929243727" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:24.724447647 +0000 UTC m=+86.608446359" watchObservedRunningTime="2024-02-09 19:05:47.508818544 +0000 UTC m=+109.392817156" Feb 9 19:05:47.614000 audit[5568]: NETFILTER_CFG table=filter:143 family=2 entries=7 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.620717 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 19:05:47.620886 kernel: audit: type=1325 audit(1707505547.614:339): table=filter:143 family=2 entries=7 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.614000 audit[5568]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdae2a0bf0 a2=0 a3=7ffdae2a0bdc items=0 ppid=2744 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.652047 kernel: audit: type=1300 audit(1707505547.614:339): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdae2a0bf0 a2=0 a3=7ffdae2a0bdc items=0 ppid=2744 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.652260 kernel: audit: type=1327 audit(1707505547.614:339): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.614000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.617000 audit[5568]: NETFILTER_CFG table=nat:144 family=2 entries=85 op=nft_register_chain pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.676869 kernel: audit: type=1325 audit(1707505547.617:340): table=nat:144 family=2 entries=85 op=nft_register_chain pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.617000 audit[5568]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffdae2a0bf0 a2=0 a3=7ffdae2a0bdc items=0 ppid=2744 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.710379 kernel: audit: type=1300 audit(1707505547.617:340): arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffdae2a0bf0 a2=0 a3=7ffdae2a0bdc items=0 ppid=2744 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.710573 kernel: audit: type=1327 audit(1707505547.617:340): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.699000 audit[5594]: NETFILTER_CFG table=filter:145 family=2 entries=6 op=nft_register_rule pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.699000 audit[5594]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe675c8870 a2=0 a3=7ffe675c885c items=0 ppid=2744 pid=5594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.745367 kernel: audit: type=1325 audit(1707505547.699:341): table=filter:145 family=2 entries=6 op=nft_register_rule pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.745617 kernel: audit: type=1300 audit(1707505547.699:341): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe675c8870 a2=0 a3=7ffe675c885c items=0 ppid=2744 pid=5594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.756495 kernel: audit: type=1327 audit(1707505547.699:341): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.714000 audit[5594]: NETFILTER_CFG table=nat:146 family=2 entries=92 op=nft_register_chain pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:47.714000 audit[5594]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe675c8870 a2=0 a3=7ffe675c885c items=0 ppid=2744 pid=5594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:05:47.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:05:47.769051 kernel: audit: type=1325 audit(1707505547.714:342): table=nat:146 family=2 entries=92 op=nft_register_chain pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:05:58.881938 env[1412]: 
time="2024-02-09T19:05:58.881804436Z" level=info msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.933 [WARNING][5635] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0", GenerateName:"calico-kube-controllers-b4fdbd88f-", Namespace:"calico-system", SelfLink:"", UID:"6eccece5-9f29-4db5-bff8-1f226e4e1432", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4fdbd88f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa", Pod:"calico-kube-controllers-b4fdbd88f-42mmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02c43c15c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:58.963755 env[1412]: 
2024-02-09 19:05:58.934 [INFO][5635] k8s.go 578: Cleaning up netns ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.934 [INFO][5635] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" iface="eth0" netns="" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.934 [INFO][5635] k8s.go 585: Releasing IP address(es) ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.934 [INFO][5635] utils.go 188: Calico CNI releasing IP address ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.953 [INFO][5641] ipam_plugin.go 415: Releasing address using handleID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.953 [INFO][5641] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.953 [INFO][5641] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.959 [WARNING][5641] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.960 [INFO][5641] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.961 [INFO][5641] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:58.963755 env[1412]: 2024-02-09 19:05:58.962 [INFO][5635] k8s.go 591: Teardown processing complete. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:58.964586 env[1412]: time="2024-02-09T19:05:58.963800087Z" level=info msg="TearDown network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" successfully" Feb 9 19:05:58.964586 env[1412]: time="2024-02-09T19:05:58.963849688Z" level=info msg="StopPodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" returns successfully" Feb 9 19:05:58.964586 env[1412]: time="2024-02-09T19:05:58.964416207Z" level=info msg="RemovePodSandbox for \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" Feb 9 19:05:58.964586 env[1412]: time="2024-02-09T19:05:58.964466308Z" level=info msg="Forcibly stopping sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\"" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:58.999 [WARNING][5659] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0", GenerateName:"calico-kube-controllers-b4fdbd88f-", Namespace:"calico-system", SelfLink:"", UID:"6eccece5-9f29-4db5-bff8-1f226e4e1432", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4fdbd88f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"b687478052c72242c47929efe370c0846f52d28f1ab46af842c47f4b3c99e1fa", Pod:"calico-kube-controllers-b4fdbd88f-42mmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02c43c15c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.000 [INFO][5659] k8s.go 578: Cleaning up netns ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.000 [INFO][5659] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" iface="eth0" netns="" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.000 [INFO][5659] k8s.go 585: Releasing IP address(es) ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.000 [INFO][5659] utils.go 188: Calico CNI releasing IP address ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.024 [INFO][5665] ipam_plugin.go 415: Releasing address using handleID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.024 [INFO][5665] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.024 [INFO][5665] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.032 [WARNING][5665] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.032 [INFO][5665] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" HandleID="k8s-pod-network.a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Workload="ci--3510.3.2--a--00ed68a33d-k8s-calico--kube--controllers--b4fdbd88f--42mmn-eth0" Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.033 [INFO][5665] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:59.035750 env[1412]: 2024-02-09 19:05:59.034 [INFO][5659] k8s.go 591: Teardown processing complete. ContainerID="a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840" Feb 9 19:05:59.036551 env[1412]: time="2024-02-09T19:05:59.035799701Z" level=info msg="TearDown network for sandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" successfully" Feb 9 19:05:59.052154 env[1412]: time="2024-02-09T19:05:59.052002318Z" level=info msg="RemovePodSandbox \"a333596f70d632f2ca0f1dec08c6d48dad11fb36611fcd47a6aa4f3f7ef8d840\" returns successfully" Feb 9 19:05:59.053144 env[1412]: time="2024-02-09T19:05:59.053113254Z" level=info msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.089 [WARNING][5684] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"61c035ba-b6dd-472f-9488-d6ad43894181", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487", Pod:"coredns-787d4945fb-q7cfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35d551e350", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.089 [INFO][5684] k8s.go 578: 
Cleaning up netns ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.090 [INFO][5684] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" iface="eth0" netns="" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.090 [INFO][5684] k8s.go 585: Releasing IP address(es) ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.090 [INFO][5684] utils.go 188: Calico CNI releasing IP address ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.110 [INFO][5690] ipam_plugin.go 415: Releasing address using handleID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.110 [INFO][5690] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.110 [INFO][5690] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.117 [WARNING][5690] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.117 [INFO][5690] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.118 [INFO][5690] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:59.120809 env[1412]: 2024-02-09 19:05:59.119 [INFO][5684] k8s.go 591: Teardown processing complete. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.121557 env[1412]: time="2024-02-09T19:05:59.120856117Z" level=info msg="TearDown network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" successfully" Feb 9 19:05:59.121557 env[1412]: time="2024-02-09T19:05:59.120904019Z" level=info msg="StopPodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" returns successfully" Feb 9 19:05:59.121645 env[1412]: time="2024-02-09T19:05:59.121590441Z" level=info msg="RemovePodSandbox for \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" Feb 9 19:05:59.121693 env[1412]: time="2024-02-09T19:05:59.121636642Z" level=info msg="Forcibly stopping sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\"" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.157 [WARNING][5709] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"61c035ba-b6dd-472f-9488-d6ad43894181", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"0a19cdb40a57486fd67afbdab9111ec5e2d9f9af17b2570009b54c2129f17487", Pod:"coredns-787d4945fb-q7cfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35d551e350", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.157 [INFO][5709] k8s.go 578: 
Cleaning up netns ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.157 [INFO][5709] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" iface="eth0" netns="" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.157 [INFO][5709] k8s.go 585: Releasing IP address(es) ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.157 [INFO][5709] utils.go 188: Calico CNI releasing IP address ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.178 [INFO][5715] ipam_plugin.go 415: Releasing address using handleID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.178 [INFO][5715] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.178 [INFO][5715] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.184 [WARNING][5715] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.184 [INFO][5715] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" HandleID="k8s-pod-network.c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--q7cfj-eth0" Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.186 [INFO][5715] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:59.192285 env[1412]: 2024-02-09 19:05:59.189 [INFO][5709] k8s.go 591: Teardown processing complete. ContainerID="c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b" Feb 9 19:05:59.192285 env[1412]: time="2024-02-09T19:05:59.192216396Z" level=info msg="TearDown network for sandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" successfully" Feb 9 19:05:59.205441 env[1412]: time="2024-02-09T19:05:59.204927702Z" level=info msg="RemovePodSandbox \"c0ef5075c8bb50b3a0c6abbb14b32bc26395d614d051807118e9532516466f4b\" returns successfully" Feb 9 19:05:59.206635 env[1412]: time="2024-02-09T19:05:59.206590155Z" level=info msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.257 [WARNING][5736] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"74ec09d8-0957-412d-816d-b7f3528f1e43", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8", Pod:"coredns-787d4945fb-whh7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc9869b55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.257 [INFO][5736] k8s.go 578: 
Cleaning up netns ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.258 [INFO][5736] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" iface="eth0" netns="" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.258 [INFO][5736] k8s.go 585: Releasing IP address(es) ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.258 [INFO][5736] utils.go 188: Calico CNI releasing IP address ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.284 [INFO][5742] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.284 [INFO][5742] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.284 [INFO][5742] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.290 [WARNING][5742] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.290 [INFO][5742] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.292 [INFO][5742] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:59.294938 env[1412]: 2024-02-09 19:05:59.293 [INFO][5736] k8s.go 591: Teardown processing complete. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.295783 env[1412]: time="2024-02-09T19:05:59.294981478Z" level=info msg="TearDown network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" successfully" Feb 9 19:05:59.295783 env[1412]: time="2024-02-09T19:05:59.295045880Z" level=info msg="StopPodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" returns successfully" Feb 9 19:05:59.295783 env[1412]: time="2024-02-09T19:05:59.295687601Z" level=info msg="RemovePodSandbox for \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" Feb 9 19:05:59.295783 env[1412]: time="2024-02-09T19:05:59.295735102Z" level=info msg="Forcibly stopping sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\"" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.334 [WARNING][5761] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"74ec09d8-0957-412d-816d-b7f3528f1e43", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"fa3ee4436f676031c49bbddd7b9f8b355c8a955fdfb6fe78f9e594f2966f1ef8", Pod:"coredns-787d4945fb-whh7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc9869b55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.334 [INFO][5761] k8s.go 578: 
Cleaning up netns ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.334 [INFO][5761] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" iface="eth0" netns="" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.334 [INFO][5761] k8s.go 585: Releasing IP address(es) ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.334 [INFO][5761] utils.go 188: Calico CNI releasing IP address ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.353 [INFO][5768] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.353 [INFO][5768] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.354 [INFO][5768] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.361 [WARNING][5768] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.361 [INFO][5768] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" HandleID="k8s-pod-network.ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Workload="ci--3510.3.2--a--00ed68a33d-k8s-coredns--787d4945fb--whh7p-eth0" Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.362 [INFO][5768] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:05:59.364760 env[1412]: 2024-02-09 19:05:59.363 [INFO][5761] k8s.go 591: Teardown processing complete. ContainerID="ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455" Feb 9 19:05:59.365579 env[1412]: time="2024-02-09T19:05:59.364798808Z" level=info msg="TearDown network for sandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" successfully" Feb 9 19:05:59.374760 env[1412]: time="2024-02-09T19:05:59.374644422Z" level=info msg="RemovePodSandbox \"ca4c512f354e2874785ce1ee902dcc1f5f9a0680c51a26939b4294569e154455\" returns successfully" Feb 9 19:05:59.375760 env[1412]: time="2024-02-09T19:05:59.375732557Z" level=info msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.413 [WARNING][5787] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b139bbd0-9b20-41a8-9896-7f2a7ac77265", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2", Pod:"csi-node-driver-r2kjb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib2a10f52b51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.413 [INFO][5787] k8s.go 578: Cleaning up netns ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.414 [INFO][5787] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" iface="eth0" netns="" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.414 [INFO][5787] k8s.go 585: Releasing IP address(es) ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.414 [INFO][5787] utils.go 188: Calico CNI releasing IP address ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.436 [INFO][5793] ipam_plugin.go 415: Releasing address using handleID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.436 [INFO][5793] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.436 [INFO][5793] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.444 [WARNING][5793] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.444 [INFO][5793] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.445 [INFO][5793] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:05:59.449257 env[1412]: 2024-02-09 19:05:59.446 [INFO][5787] k8s.go 591: Teardown processing complete. ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.449257 env[1412]: time="2024-02-09T19:05:59.448093168Z" level=info msg="TearDown network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" successfully" Feb 9 19:05:59.449257 env[1412]: time="2024-02-09T19:05:59.448154370Z" level=info msg="StopPodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" returns successfully" Feb 9 19:05:59.450130 env[1412]: time="2024-02-09T19:05:59.449281906Z" level=info msg="RemovePodSandbox for \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" Feb 9 19:05:59.450130 env[1412]: time="2024-02-09T19:05:59.449343708Z" level=info msg="Forcibly stopping sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\"" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.489 [WARNING][5811] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b139bbd0-9b20-41a8-9896-7f2a7ac77265", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 4, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-00ed68a33d", ContainerID:"c4745190de19d2d792818d4def785a4717b14193b955c277bb243b51d49a75b2", Pod:"csi-node-driver-r2kjb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib2a10f52b51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.490 [INFO][5811] k8s.go 578: Cleaning up netns ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.490 [INFO][5811] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" iface="eth0" netns="" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.490 [INFO][5811] k8s.go 585: Releasing IP address(es) ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.490 [INFO][5811] utils.go 188: Calico CNI releasing IP address ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.511 [INFO][5817] ipam_plugin.go 415: Releasing address using handleID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.511 [INFO][5817] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.511 [INFO][5817] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.518 [WARNING][5817] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.518 [INFO][5817] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" HandleID="k8s-pod-network.1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49" Workload="ci--3510.3.2--a--00ed68a33d-k8s-csi--node--driver--r2kjb-eth0" Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.519 [INFO][5817] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:05:59.521949 env[1412]: 2024-02-09 19:05:59.520 [INFO][5811] k8s.go 591: Teardown processing complete. ContainerID="1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49"
Feb 9 19:05:59.522632 env[1412]: time="2024-02-09T19:05:59.522583847Z" level=info msg="TearDown network for sandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" successfully"
Feb 9 19:05:59.532224 env[1412]: time="2024-02-09T19:05:59.532166453Z" level=info msg="RemovePodSandbox \"1f58979f77e3dddd4dbbe5a1783773cd12d470bf2511e9c15e04ba2c5a408b49\" returns successfully"
Feb 9 19:06:09.924264 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.CMDScc.mount: Deactivated successfully.
Feb 9 19:06:17.429278 systemd[1]: run-containerd-runc-k8s.io-770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72-runc.1Kd1s0.mount: Deactivated successfully.
Feb 9 19:06:17.460570 systemd[1]: run-containerd-runc-k8s.io-6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924-runc.5OIKYZ.mount: Deactivated successfully.
Feb 9 19:06:21.120012 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.VWHxjh.mount: Deactivated successfully.
Feb 9 19:06:21.181560 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.ue889d.mount: Deactivated successfully.
Feb 9 19:06:39.931923 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.iqv5T3.mount: Deactivated successfully.
Feb 9 19:06:42.890099 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 9 19:06:42.890346 kernel: audit: type=1130 audit(1707505602.865:343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.37:22-10.200.12.6:39580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:42.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.37:22-10.200.12.6:39580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:42.866624 systemd[1]: Started sshd@7-10.200.8.37:22-10.200.12.6:39580.service.
Feb 9 19:06:43.486000 audit[5975]: USER_ACCT pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.507853 sshd[5975]: Accepted publickey for core from 10.200.12.6 port 39580 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:43.508364 kernel: audit: type=1101 audit(1707505603.486:344): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.508568 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:43.506000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.518031 systemd[1]: Started session-10.scope.
Feb 9 19:06:43.519048 systemd-logind[1374]: New session 10 of user core.
Feb 9 19:06:43.529048 kernel: audit: type=1103 audit(1707505603.506:345): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.506000 audit[5975]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd31a77900 a2=3 a3=0 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:43.542114 kernel: audit: type=1006 audit(1707505603.506:346): pid=5975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Feb 9 19:06:43.542169 kernel: audit: type=1300 audit(1707505603.506:346): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd31a77900 a2=3 a3=0 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:43.506000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:43.524000 audit[5975]: USER_START pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.588053 kernel: audit: type=1327 audit(1707505603.506:346): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:43.588177 kernel: audit: type=1105 audit(1707505603.524:347): pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.529000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:43.605071 kernel: audit: type=1103 audit(1707505603.529:348): pid=5978 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:44.036878 sshd[5975]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:44.037000 audit[5975]: USER_END pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:44.054554 systemd[1]: sshd@7-10.200.8.37:22-10.200.12.6:39580.service: Deactivated successfully.
Feb 9 19:06:44.055670 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:06:44.057806 systemd-logind[1374]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:06:44.059560 systemd-logind[1374]: Removed session 10.
Feb 9 19:06:44.037000 audit[5975]: CRED_DISP pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:44.078151 kernel: audit: type=1106 audit(1707505604.037:349): pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:44.078353 kernel: audit: type=1104 audit(1707505604.037:350): pid=5975 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:44.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.37:22-10.200.12.6:39580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:47.431654 systemd[1]: run-containerd-runc-k8s.io-770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72-runc.FKY0O3.mount: Deactivated successfully.
Feb 9 19:06:47.478084 systemd[1]: run-containerd-runc-k8s.io-6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924-runc.sOEj0w.mount: Deactivated successfully.
Feb 9 19:06:49.149716 systemd[1]: Started sshd@8-10.200.8.37:22-10.200.12.6:60710.service.
Feb 9 19:06:49.175650 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:06:49.175796 kernel: audit: type=1130 audit(1707505609.150:352): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.37:22-10.200.12.6:60710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:49.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.37:22-10.200.12.6:60710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:49.861000 audit[6028]: USER_ACCT pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.883122 kernel: audit: type=1101 audit(1707505609.861:353): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.883184 sshd[6028]: Accepted publickey for core from 10.200.12.6 port 60710 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:49.882000 audit[6028]: CRED_ACQ pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.883846 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:49.889338 systemd-logind[1374]: New session 11 of user core.
Feb 9 19:06:49.892924 systemd[1]: Started session-11.scope.
Feb 9 19:06:49.913960 kernel: audit: type=1103 audit(1707505609.882:354): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.914150 kernel: audit: type=1006 audit(1707505609.882:355): pid=6028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Feb 9 19:06:49.914180 kernel: audit: type=1300 audit(1707505609.882:355): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc37197130 a2=3 a3=0 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:49.882000 audit[6028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc37197130 a2=3 a3=0 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:49.933365 kernel: audit: type=1327 audit(1707505609.882:355): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:49.882000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:49.898000 audit[6028]: USER_START pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.958165 kernel: audit: type=1105 audit(1707505609.898:356): pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.958341 kernel: audit: type=1103 audit(1707505609.898:357): pid=6030 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:49.898000 audit[6030]: CRED_ACQ pid=6030 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:50.450581 sshd[6028]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:50.451000 audit[6028]: USER_END pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:50.454996 systemd-logind[1374]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:06:50.456599 systemd[1]: sshd@8-10.200.8.37:22-10.200.12.6:60710.service: Deactivated successfully.
Feb 9 19:06:50.457649 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:06:50.459297 systemd-logind[1374]: Removed session 11.
Feb 9 19:06:50.451000 audit[6028]: CRED_DISP pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:50.474051 kernel: audit: type=1106 audit(1707505610.451:358): pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:50.474135 kernel: audit: type=1104 audit(1707505610.451:359): pid=6028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:50.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.37:22-10.200.12.6:60710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:55.814679 systemd[1]: Started sshd@9-10.200.8.37:22-10.200.12.6:60714.service.
Feb 9 19:06:55.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.37:22-10.200.12.6:60714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:55.822334 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:06:55.822387 kernel: audit: type=1130 audit(1707505615.814:361): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.37:22-10.200.12.6:60714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:06:57.125000 audit[6060]: USER_ACCT pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.145671 sshd[6060]: Accepted publickey for core from 10.200.12.6 port 60714 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:57.146265 kernel: audit: type=1101 audit(1707505617.125:362): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.145000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.146914 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:57.157669 systemd-logind[1374]: New session 12 of user core.
Feb 9 19:06:57.159181 systemd[1]: Started session-12.scope.
Feb 9 19:06:57.167290 kernel: audit: type=1103 audit(1707505617.145:363): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.145000 audit[6060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe31ee60a0 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:57.198594 kernel: audit: type=1006 audit(1707505617.145:364): pid=6060 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Feb 9 19:06:57.198776 kernel: audit: type=1300 audit(1707505617.145:364): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe31ee60a0 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:06:57.198807 kernel: audit: type=1327 audit(1707505617.145:364): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:57.145000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:06:57.205962 kernel: audit: type=1105 audit(1707505617.165:365): pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.165000 audit[6060]: USER_START pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.168000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.241240 kernel: audit: type=1103 audit(1707505617.168:366): pid=6063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.809945 sshd[6060]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:57.810000 audit[6060]: USER_END pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.813856 systemd-logind[1374]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:06:57.815610 systemd[1]: sshd@9-10.200.8.37:22-10.200.12.6:60714.service: Deactivated successfully.
Feb 9 19:06:57.817253 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:06:57.818576 systemd-logind[1374]: Removed session 12.
Feb 9 19:06:57.810000 audit[6060]: CRED_DISP pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.847254 kernel: audit: type=1106 audit(1707505617.810:367): pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.847488 kernel: audit: type=1104 audit(1707505617.810:368): pid=6060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:06:57.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.37:22-10.200.12.6:60714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:02.940564 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:07:02.940752 kernel: audit: type=1130 audit(1707505622.916:370): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.37:22-10.200.12.6:52744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:02.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.37:22-10.200.12.6:52744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:02.917383 systemd[1]: Started sshd@10-10.200.8.37:22-10.200.12.6:52744.service.
Feb 9 19:07:03.537000 audit[6077]: USER_ACCT pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.540312 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:03.558258 kernel: audit: type=1101 audit(1707505623.537:371): pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.558299 sshd[6077]: Accepted publickey for core from 10.200.12.6 port 52744 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:03.538000 audit[6077]: CRED_ACQ pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.564236 systemd[1]: Started session-13.scope.
Feb 9 19:07:03.565704 systemd-logind[1374]: New session 13 of user core.
Feb 9 19:07:03.588156 kernel: audit: type=1103 audit(1707505623.538:372): pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.588320 kernel: audit: type=1006 audit(1707505623.538:373): pid=6077 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Feb 9 19:07:03.588348 kernel: audit: type=1300 audit(1707505623.538:373): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe693eb880 a2=3 a3=0 items=0 ppid=1 pid=6077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:03.538000 audit[6077]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe693eb880 a2=3 a3=0 items=0 ppid=1 pid=6077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:03.607281 kernel: audit: type=1327 audit(1707505623.538:373): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:03.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:03.569000 audit[6077]: USER_START pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.632185 kernel: audit: type=1105 audit(1707505623.569:374): pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.632386 kernel: audit: type=1103 audit(1707505623.575:375): pid=6079 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:03.575000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.045735 sshd[6077]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:04.046000 audit[6077]: USER_END pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.050639 systemd-logind[1374]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:07:04.052231 systemd[1]: sshd@10-10.200.8.37:22-10.200.12.6:52744.service: Deactivated successfully.
Feb 9 19:07:04.053327 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:07:04.055085 systemd-logind[1374]: Removed session 13.
Feb 9 19:07:04.046000 audit[6077]: CRED_DISP pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.083897 kernel: audit: type=1106 audit(1707505624.046:376): pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.084103 kernel: audit: type=1104 audit(1707505624.046:377): pid=6077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.37:22-10.200.12.6:52744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:04.149480 systemd[1]: Started sshd@11-10.200.8.37:22-10.200.12.6:52758.service.
Feb 9 19:07:04.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.37:22-10.200.12.6:52758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:04.769000 audit[6091]: USER_ACCT pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.771164 sshd[6091]: Accepted publickey for core from 10.200.12.6 port 52758 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:04.771000 audit[6091]: CRED_ACQ pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.771000 audit[6091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff10c25fe0 a2=3 a3=0 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:04.771000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:04.772937 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:04.781356 systemd[1]: Started session-14.scope.
Feb 9 19:07:04.781842 systemd-logind[1374]: New session 14 of user core.
Feb 9 19:07:04.792000 audit[6091]: USER_START pid=6091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:04.794000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:06.385411 sshd[6091]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:06.386000 audit[6091]: USER_END pid=6091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:06.386000 audit[6091]: CRED_DISP pid=6091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:06.389867 systemd[1]: sshd@11-10.200.8.37:22-10.200.12.6:52758.service: Deactivated successfully.
Feb 9 19:07:06.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.37:22-10.200.12.6:52758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:06.391746 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:07:06.392381 systemd-logind[1374]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:07:06.394550 systemd-logind[1374]: Removed session 14.
Feb 9 19:07:06.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.37:22-10.200.12.6:52766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:06.490399 systemd[1]: Started sshd@12-10.200.8.37:22-10.200.12.6:52766.service.
Feb 9 19:07:07.110000 audit[6102]: USER_ACCT pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:07.112300 sshd[6102]: Accepted publickey for core from 10.200.12.6 port 52766 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:07.112000 audit[6102]: CRED_ACQ pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:07.112000 audit[6102]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5fddd6c0 a2=3 a3=0 items=0 ppid=1 pid=6102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:07.112000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:07.114384 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:07.122059 systemd[1]: Started session-15.scope.
Feb 9 19:07:07.123698 systemd-logind[1374]: New session 15 of user core.
Feb 9 19:07:07.130000 audit[6102]: USER_START pid=6102 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:07.132000 audit[6105]: CRED_ACQ pid=6105 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:07.619881 sshd[6102]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:07.620000 audit[6102]: USER_END pid=6102 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:07.621000 audit[6102]: CRED_DISP pid=6102 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:07.624423 systemd[1]: sshd@12-10.200.8.37:22-10.200.12.6:52766.service: Deactivated successfully. Feb 9 19:07:07.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.37:22-10.200.12.6:52766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:07.628080 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:07:07.628873 systemd-logind[1374]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:07:07.630839 systemd-logind[1374]: Removed session 15. 
Feb 9 19:07:09.923228 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.2A9Oun.mount: Deactivated successfully. Feb 9 19:07:12.737413 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:07:12.737580 kernel: audit: type=1130 audit(1707505632.724:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.37:22-10.200.12.6:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:12.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.37:22-10.200.12.6:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:12.725337 systemd[1]: Started sshd@13-10.200.8.37:22-10.200.12.6:55894.service. Feb 9 19:07:13.346000 audit[6144]: USER_ACCT pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.348444 sshd[6144]: Accepted publickey for core from 10.200.12.6 port 55894 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:13.368106 kernel: audit: type=1101 audit(1707505633.346:398): pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.368589 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:13.366000 audit[6144]: CRED_ACQ pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.384155 systemd[1]: Started session-16.scope. Feb 9 19:07:13.385616 systemd-logind[1374]: New session 16 of user core. Feb 9 19:07:13.388071 kernel: audit: type=1103 audit(1707505633.366:399): pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.366000 audit[6144]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd25605400 a2=3 a3=0 items=0 ppid=1 pid=6144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:13.423344 kernel: audit: type=1006 audit(1707505633.366:400): pid=6144 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 19:07:13.423485 kernel: audit: type=1300 audit(1707505633.366:400): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd25605400 a2=3 a3=0 items=0 ppid=1 pid=6144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:13.366000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:13.392000 audit[6144]: USER_START pid=6144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.431038 kernel: audit: type=1327 audit(1707505633.366:400): proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:13.431089 kernel: audit: type=1105 audit(1707505633.392:401): pid=6144 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.398000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.466067 kernel: audit: type=1103 audit(1707505633.398:402): pid=6147 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.860742 sshd[6144]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:13.861000 audit[6144]: USER_END pid=6144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.867012 systemd-logind[1374]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:07:13.868683 systemd[1]: sshd@13-10.200.8.37:22-10.200.12.6:55894.service: Deactivated successfully. Feb 9 19:07:13.869791 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:07:13.871738 systemd-logind[1374]: Removed session 16. 
Feb 9 19:07:13.862000 audit[6144]: CRED_DISP pid=6144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.900135 kernel: audit: type=1106 audit(1707505633.861:403): pid=6144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.900317 kernel: audit: type=1104 audit(1707505633.862:404): pid=6144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:13.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.37:22-10.200.12.6:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:17.466672 systemd[1]: run-containerd-runc-k8s.io-6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924-runc.20NzGv.mount: Deactivated successfully. Feb 9 19:07:18.990178 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:07:18.990517 kernel: audit: type=1130 audit(1707505638.966:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.37:22-10.200.12.6:53792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:18.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.37:22-10.200.12.6:53792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:07:18.966712 systemd[1]: Started sshd@14-10.200.8.37:22-10.200.12.6:53792.service. Feb 9 19:07:19.591000 audit[6194]: USER_ACCT pid=6194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.594140 sshd[6194]: Accepted publickey for core from 10.200.12.6 port 53792 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:19.610000 audit[6194]: CRED_ACQ pid=6194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.611876 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:19.622357 systemd[1]: Started session-17.scope. Feb 9 19:07:19.623360 systemd-logind[1374]: New session 17 of user core. 
Feb 9 19:07:19.630992 kernel: audit: type=1101 audit(1707505639.591:407): pid=6194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.631107 kernel: audit: type=1103 audit(1707505639.610:408): pid=6194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.610000 audit[6194]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc00f1e710 a2=3 a3=0 items=0 ppid=1 pid=6194 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:19.660292 kernel: audit: type=1006 audit(1707505639.610:409): pid=6194 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:07:19.660388 kernel: audit: type=1300 audit(1707505639.610:409): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc00f1e710 a2=3 a3=0 items=0 ppid=1 pid=6194 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:19.610000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:19.666515 kernel: audit: type=1327 audit(1707505639.610:409): proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:19.626000 audit[6194]: USER_START pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh 
res=success' Feb 9 19:07:19.667041 kernel: audit: type=1105 audit(1707505639.626:410): pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.630000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:19.686044 kernel: audit: type=1103 audit(1707505639.630:411): pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:20.117273 sshd[6194]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:20.117000 audit[6194]: USER_END pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:20.121702 systemd-logind[1374]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:07:20.123002 systemd[1]: sshd@14-10.200.8.37:22-10.200.12.6:53792.service: Deactivated successfully. Feb 9 19:07:20.124079 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:07:20.125451 systemd-logind[1374]: Removed session 17. 
Feb 9 19:07:20.118000 audit[6194]: CRED_DISP pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:20.154415 kernel: audit: type=1106 audit(1707505640.117:412): pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:20.154524 kernel: audit: type=1104 audit(1707505640.118:413): pid=6194 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:20.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.37:22-10.200.12.6:53792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:21.126348 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.U8XNTS.mount: Deactivated successfully. Feb 9 19:07:21.185763 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.1Jie7K.mount: Deactivated successfully. Feb 9 19:07:25.222476 systemd[1]: Started sshd@15-10.200.8.37:22-10.200.12.6:53794.service. Feb 9 19:07:25.247804 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:07:25.248011 kernel: audit: type=1130 audit(1707505645.221:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.37:22-10.200.12.6:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:07:25.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.37:22-10.200.12.6:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:25.842000 audit[6245]: USER_ACCT pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.864205 kernel: audit: type=1101 audit(1707505645.842:416): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.864263 sshd[6245]: Accepted publickey for core from 10.200.12.6 port 53794 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:25.864659 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:25.863000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.870944 systemd[1]: Started session-18.scope. Feb 9 19:07:25.872547 systemd-logind[1374]: New session 18 of user core. 
Feb 9 19:07:25.885099 kernel: audit: type=1103 audit(1707505645.863:417): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.897260 kernel: audit: type=1006 audit(1707505645.863:418): pid=6245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 19:07:25.897353 kernel: audit: type=1300 audit(1707505645.863:418): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdcd1df80 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:25.863000 audit[6245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdcd1df80 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:25.916779 kernel: audit: type=1327 audit(1707505645.863:418): proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:25.863000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:25.877000 audit[6245]: USER_START pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.940999 kernel: audit: type=1105 audit(1707505645.877:419): pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.885000 audit[6247]: CRED_ACQ pid=6247 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:25.957251 kernel: audit: type=1103 audit(1707505645.885:420): pid=6247 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:26.354773 sshd[6245]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:26.355000 audit[6245]: USER_END pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:26.359519 systemd[1]: sshd@15-10.200.8.37:22-10.200.12.6:53794.service: Deactivated successfully. Feb 9 19:07:26.360841 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:07:26.369092 systemd-logind[1374]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:07:26.370220 systemd-logind[1374]: Removed session 18. 
Feb 9 19:07:26.377071 kernel: audit: type=1106 audit(1707505646.355:421): pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:26.356000 audit[6245]: CRED_DISP pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:26.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.37:22-10.200.12.6:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:26.396071 kernel: audit: type=1104 audit(1707505646.356:422): pid=6245 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:26.457885 systemd[1]: Started sshd@16-10.200.8.37:22-10.200.12.6:53806.service. Feb 9 19:07:26.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.37:22-10.200.12.6:53806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:07:27.072000 audit[6258]: USER_ACCT pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.074000 audit[6258]: CRED_ACQ pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.074000 audit[6258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed4e05870 a2=3 a3=0 items=0 ppid=1 pid=6258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:27.074000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:27.081239 systemd[1]: Started session-19.scope. Feb 9 19:07:27.075549 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:27.088605 sshd[6258]: Accepted publickey for core from 10.200.12.6 port 53806 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:27.081938 systemd-logind[1374]: New session 19 of user core. 
Feb 9 19:07:27.090000 audit[6258]: USER_START pid=6258 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.092000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.654978 sshd[6258]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:27.656000 audit[6258]: USER_END pid=6258 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.656000 audit[6258]: CRED_DISP pid=6258 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:27.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.37:22-10.200.12.6:53806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:27.659493 systemd[1]: sshd@16-10.200.8.37:22-10.200.12.6:53806.service: Deactivated successfully. Feb 9 19:07:27.662674 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:07:27.663097 systemd-logind[1374]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:07:27.665784 systemd-logind[1374]: Removed session 19. Feb 9 19:07:27.761521 systemd[1]: Started sshd@17-10.200.8.37:22-10.200.12.6:53940.service. 
Feb 9 19:07:27.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.37:22-10.200.12.6:53940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:28.404000 audit[6269]: USER_ACCT pid=6269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:28.405894 sshd[6269]: Accepted publickey for core from 10.200.12.6 port 53940 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:28.406000 audit[6269]: CRED_ACQ pid=6269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:28.406000 audit[6269]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51162dd0 a2=3 a3=0 items=0 ppid=1 pid=6269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:28.406000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:07:28.407670 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:28.414973 systemd[1]: Started session-20.scope. Feb 9 19:07:28.415492 systemd-logind[1374]: New session 20 of user core. 
Feb 9 19:07:28.426000 audit[6269]: USER_START pid=6269 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:28.430000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:29.951000 audit[6310]: NETFILTER_CFG table=filter:147 family=2 entries=18 op=nft_register_rule pid=6310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.951000 audit[6310]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe106a8770 a2=0 a3=7ffe106a875c items=0 ppid=2744 pid=6310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:29.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:29.955000 audit[6310]: NETFILTER_CFG table=nat:148 family=2 entries=94 op=nft_register_rule pid=6310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:07:29.955000 audit[6310]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe106a8770 a2=0 a3=7ffe106a875c items=0 ppid=2744 pid=6310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:07:29.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:07:29.961444 
sshd[6269]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:29.961000 audit[6269]: USER_END pid=6269 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:29.961000 audit[6269]: CRED_DISP pid=6269 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:29.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.37:22-10.200.12.6:53940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:29.965209 systemd[1]: sshd@17-10.200.8.37:22-10.200.12.6:53940.service: Deactivated successfully.
Feb 9 19:07:29.967963 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:07:29.968269 systemd-logind[1374]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:07:29.970752 systemd-logind[1374]: Removed session 20.
Feb 9 19:07:30.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.37:22-10.200.12.6:53954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:30.063865 systemd[1]: Started sshd@18-10.200.8.37:22-10.200.12.6:53954.service.
Feb 9 19:07:30.083000 audit[6339]: NETFILTER_CFG table=filter:149 family=2 entries=30 op=nft_register_rule pid=6339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:30.083000 audit[6339]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffdd3b60f40 a2=0 a3=7ffdd3b60f2c items=0 ppid=2744 pid=6339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:30.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:30.102000 audit[6339]: NETFILTER_CFG table=nat:150 family=2 entries=94 op=nft_register_rule pid=6339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:30.102000 audit[6339]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffdd3b60f40 a2=0 a3=7ffdd3b60f2c items=0 ppid=2744 pid=6339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:30.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:30.699584 kernel: kauditd_printk_skb: 36 callbacks suppressed
Feb 9 19:07:30.699770 kernel: audit: type=1101 audit(1707505650.692:447): pid=6338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.692000 audit[6338]: USER_ACCT pid=6338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.699143 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:30.700250 sshd[6338]: Accepted publickey for core from 10.200.12.6 port 53954 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:30.706974 systemd[1]: Started session-21.scope.
Feb 9 19:07:30.708221 systemd-logind[1374]: New session 21 of user core.
Feb 9 19:07:30.697000 audit[6338]: CRED_ACQ pid=6338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.750885 kernel: audit: type=1103 audit(1707505650.697:448): pid=6338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.751158 kernel: audit: type=1006 audit(1707505650.697:449): pid=6338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Feb 9 19:07:30.697000 audit[6338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc954ba1c0 a2=3 a3=0 items=0 ppid=1 pid=6338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:30.697000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:30.777883 kernel: audit: type=1300 audit(1707505650.697:449): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc954ba1c0 a2=3 a3=0 items=0 ppid=1 pid=6338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:30.778096 kernel: audit: type=1327 audit(1707505650.697:449): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:30.778137 kernel: audit: type=1105 audit(1707505650.712:450): pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.712000 audit[6338]: USER_START pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.716000 audit[6341]: CRED_ACQ pid=6341 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:30.814111 kernel: audit: type=1103 audit(1707505650.716:451): pid=6341 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:31.486384 sshd[6338]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:31.487000 audit[6338]: USER_END pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:31.498709 systemd[1]: sshd@18-10.200.8.37:22-10.200.12.6:53954.service: Deactivated successfully.
Feb 9 19:07:31.499884 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 19:07:31.501485 systemd-logind[1374]: Session 21 logged out. Waiting for processes to exit.
Feb 9 19:07:31.502719 systemd-logind[1374]: Removed session 21.
Feb 9 19:07:31.493000 audit[6338]: CRED_DISP pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:31.527835 kernel: audit: type=1106 audit(1707505651.487:452): pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:31.528081 kernel: audit: type=1104 audit(1707505651.493:453): pid=6338 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:31.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.37:22-10.200.12.6:53954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:31.546055 kernel: audit: type=1131 audit(1707505651.498:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.37:22-10.200.12.6:53954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:31.590574 systemd[1]: Started sshd@19-10.200.8.37:22-10.200.12.6:53966.service.
Feb 9 19:07:31.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.37:22-10.200.12.6:53966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:32.211000 audit[6351]: USER_ACCT pid=6351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.212877 sshd[6351]: Accepted publickey for core from 10.200.12.6 port 53966 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:32.212000 audit[6351]: CRED_ACQ pid=6351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.213000 audit[6351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb335ff90 a2=3 a3=0 items=0 ppid=1 pid=6351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:32.213000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:32.217215 sshd[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:32.223602 systemd[1]: Started session-22.scope.
Feb 9 19:07:32.224394 systemd-logind[1374]: New session 22 of user core.
Feb 9 19:07:32.229000 audit[6351]: USER_START pid=6351 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.231000 audit[6354]: CRED_ACQ pid=6354 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.713974 sshd[6351]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:32.714000 audit[6351]: USER_END pid=6351 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.715000 audit[6351]: CRED_DISP pid=6351 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:32.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.37:22-10.200.12.6:53966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:32.719380 systemd[1]: sshd@19-10.200.8.37:22-10.200.12.6:53966.service: Deactivated successfully.
Feb 9 19:07:32.720232 systemd-logind[1374]: Session 22 logged out. Waiting for processes to exit.
Feb 9 19:07:32.724292 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 19:07:32.728248 systemd-logind[1374]: Removed session 22.
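The audit PROCTITLE records above log the triggering command line hex-encoded, with NUL bytes separating the argv entries. A minimal decoder sketch (the helper name is mine; it assumes the field is plain hex-encoded ASCII, as it is in these entries):

```python
def decode_proctitle(hex_field: str) -> str:
    """Turn an audit proctitle= hex string back into a readable command line."""
    raw = bytes.fromhex(hex_field)
    # The kernel records argv with NUL separators between arguments.
    return raw.decode("ascii", errors="replace").replace("\x00", " ")

# The iptables-restore PROCTITLE value logged above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))  # iptables-restore -w 5 -W 100000 --noflush --counters
```

The sshd records decode the same way: `737368643A20636F7265205B707269765D` is `sshd: core [priv]`, the privileged-monitor process title for each incoming session.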
Feb 9 19:07:37.136000 audit[6389]: NETFILTER_CFG table=filter:151 family=2 entries=18 op=nft_register_rule pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:37.143330 kernel: kauditd_printk_skb: 11 callbacks suppressed
Feb 9 19:07:37.143381 kernel: audit: type=1325 audit(1707505657.136:464): table=filter:151 family=2 entries=18 op=nft_register_rule pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:37.136000 audit[6389]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffcc76ceb0 a2=0 a3=7fffcc76ce9c items=0 ppid=2744 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:37.179053 kernel: audit: type=1300 audit(1707505657.136:464): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffcc76ceb0 a2=0 a3=7fffcc76ce9c items=0 ppid=2744 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:37.179220 kernel: audit: type=1327 audit(1707505657.136:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:37.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:37.137000 audit[6389]: NETFILTER_CFG table=nat:152 family=2 entries=178 op=nft_register_chain pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:37.199319 kernel: audit: type=1325 audit(1707505657.137:465): table=nat:152 family=2 entries=178 op=nft_register_chain pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:07:37.199477 kernel: audit: type=1300 audit(1707505657.137:465): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fffcc76ceb0 a2=0 a3=7fffcc76ce9c items=0 ppid=2744 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:37.137000 audit[6389]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fffcc76ceb0 a2=0 a3=7fffcc76ce9c items=0 ppid=2744 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:37.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:37.220041 kernel: audit: type=1327 audit(1707505657.137:465): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:07:37.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.37:22-10.200.12.6:34412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:37.819802 systemd[1]: Started sshd@20-10.200.8.37:22-10.200.12.6:34412.service.
Feb 9 19:07:37.839097 kernel: audit: type=1130 audit(1707505657.819:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.37:22-10.200.12.6:34412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:38.444000 audit[6391]: USER_ACCT pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.446003 sshd[6391]: Accepted publickey for core from 10.200.12.6 port 34412 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:38.465057 kernel: audit: type=1101 audit(1707505658.444:467): pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.465765 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:38.464000 audit[6391]: CRED_ACQ pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.472875 systemd[1]: Started session-23.scope.
Feb 9 19:07:38.474130 systemd-logind[1374]: New session 23 of user core.
Feb 9 19:07:38.486054 kernel: audit: type=1103 audit(1707505658.464:468): pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.464000 audit[6391]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1b03f990 a2=3 a3=0 items=0 ppid=1 pid=6391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:38.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:38.476000 audit[6391]: USER_START pid=6391 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.498104 kernel: audit: type=1006 audit(1707505658.464:469): pid=6391 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Feb 9 19:07:38.486000 audit[6393]: CRED_ACQ pid=6393 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.948560 sshd[6391]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:38.949000 audit[6391]: USER_END pid=6391 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.949000 audit[6391]: CRED_DISP pid=6391 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:38.952364 systemd[1]: sshd@20-10.200.8.37:22-10.200.12.6:34412.service: Deactivated successfully.
Feb 9 19:07:38.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.37:22-10.200.12.6:34412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:38.954642 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 19:07:38.954984 systemd-logind[1374]: Session 23 logged out. Waiting for processes to exit.
Feb 9 19:07:38.956373 systemd-logind[1374]: Removed session 23.
Feb 9 19:07:39.921122 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.qnICS9.mount: Deactivated successfully.
Feb 9 19:07:44.065389 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 9 19:07:44.065566 kernel: audit: type=1130 audit(1707505664.052:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.37:22-10.200.12.6:34428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:44.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.37:22-10.200.12.6:34428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:44.053002 systemd[1]: Started sshd@21-10.200.8.37:22-10.200.12.6:34428.service.
Feb 9 19:07:44.675000 audit[6436]: USER_ACCT pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.676652 sshd[6436]: Accepted publickey for core from 10.200.12.6 port 34428 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:44.696088 kernel: audit: type=1101 audit(1707505664.675:476): pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.695000 audit[6436]: CRED_ACQ pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.697289 sshd[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:44.707432 systemd[1]: Started session-24.scope.
Feb 9 19:07:44.709763 systemd-logind[1374]: New session 24 of user core.
Feb 9 19:07:44.720060 kernel: audit: type=1103 audit(1707505664.695:477): pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.695000 audit[6436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc71729600 a2=3 a3=0 items=0 ppid=1 pid=6436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:44.738118 kernel: audit: type=1006 audit(1707505664.695:478): pid=6436 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Feb 9 19:07:44.738204 kernel: audit: type=1300 audit(1707505664.695:478): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc71729600 a2=3 a3=0 items=0 ppid=1 pid=6436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:44.758855 kernel: audit: type=1327 audit(1707505664.695:478): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:44.695000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:44.717000 audit[6436]: USER_START pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.787214 kernel: audit: type=1105 audit(1707505664.717:479): pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.787472 kernel: audit: type=1103 audit(1707505664.722:480): pid=6439 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:44.722000 audit[6439]: CRED_ACQ pid=6439 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:45.207105 sshd[6436]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:45.208000 audit[6436]: USER_END pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:45.212638 systemd[1]: sshd@21-10.200.8.37:22-10.200.12.6:34428.service: Deactivated successfully.
Feb 9 19:07:45.214871 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 19:07:45.217811 systemd-logind[1374]: Session 24 logged out. Waiting for processes to exit.
Feb 9 19:07:45.219554 systemd-logind[1374]: Removed session 24.
Feb 9 19:07:45.209000 audit[6436]: CRED_DISP pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:45.249671 kernel: audit: type=1106 audit(1707505665.208:481): pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:45.250011 kernel: audit: type=1104 audit(1707505665.209:482): pid=6436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:45.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.37:22-10.200.12.6:34428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:50.314465 systemd[1]: Started sshd@22-10.200.8.37:22-10.200.12.6:56308.service.
Feb 9 19:07:50.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.37:22-10.200.12.6:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:50.319634 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:07:50.319764 kernel: audit: type=1130 audit(1707505670.313:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.37:22-10.200.12.6:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:07:50.934000 audit[6486]: USER_ACCT pid=6486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:50.956655 sshd[6486]: Accepted publickey for core from 10.200.12.6 port 56308 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:50.957218 kernel: audit: type=1101 audit(1707505670.934:485): pid=6486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:50.955000 audit[6486]: CRED_ACQ pid=6486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:50.957543 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:50.964321 systemd[1]: Started session-25.scope.
Feb 9 19:07:50.965526 systemd-logind[1374]: New session 25 of user core.
Feb 9 19:07:50.976044 kernel: audit: type=1103 audit(1707505670.955:486): pid=6486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:50.991042 kernel: audit: type=1006 audit(1707505670.956:487): pid=6486 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 9 19:07:50.991136 kernel: audit: type=1300 audit(1707505670.956:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4d530ca0 a2=3 a3=0 items=0 ppid=1 pid=6486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:50.956000 audit[6486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4d530ca0 a2=3 a3=0 items=0 ppid=1 pid=6486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:50.956000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:51.008040 kernel: audit: type=1327 audit(1707505670.956:487): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:50.970000 audit[6486]: USER_START pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:51.033289 kernel: audit: type=1105 audit(1707505670.970:488): pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:50.977000 audit[6494]: CRED_ACQ pid=6494 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:51.053077 kernel: audit: type=1103 audit(1707505670.977:489): pid=6494 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:51.120724 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.UYPXwf.mount: Deactivated successfully.
Feb 9 19:07:51.439138 sshd[6486]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:51.439000 audit[6486]: USER_END pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:51.444729 systemd-logind[1374]: Session 25 logged out. Waiting for processes to exit.
Feb 9 19:07:51.446423 systemd[1]: sshd@22-10.200.8.37:22-10.200.12.6:56308.service: Deactivated successfully.
Feb 9 19:07:51.447498 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 19:07:51.449132 systemd-logind[1374]: Removed session 25.
Feb 9 19:07:51.461049 kernel: audit: type=1106 audit(1707505671.439:490): pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:51.439000 audit[6486]: CRED_DISP pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:51.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.37:22-10.200.12.6:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:51.478061 kernel: audit: type=1104 audit(1707505671.439:491): pid=6486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:56.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.37:22-10.200.12.6:56314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:07:56.545873 systemd[1]: Started sshd@23-10.200.8.37:22-10.200.12.6:56314.service. Feb 9 19:07:56.551279 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:07:56.551420 kernel: audit: type=1130 audit(1707505676.545:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.37:22-10.200.12.6:56314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:07:57.166000 audit[6522]: USER_ACCT pid=6522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:57.187435 sshd[6522]: Accepted publickey for core from 10.200.12.6 port 56314 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:57.187905 sshd[6522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:57.188205 kernel: audit: type=1101 audit(1707505677.166:494): pid=6522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:57.186000 audit[6522]: CRED_ACQ pid=6522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 19:07:57.194249 systemd[1]: Started session-26.scope. Feb 9 19:07:57.195292 systemd-logind[1374]: New session 26 of user core. 
Feb 9 19:07:57.208086 kernel: audit: type=1103 audit(1707505677.186:495): pid=6522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.186000 audit[6522]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff75bdcc00 a2=3 a3=0 items=0 ppid=1 pid=6522 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:57.240658 kernel: audit: type=1006 audit(1707505677.186:496): pid=6522 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Feb 9 19:07:57.240875 kernel: audit: type=1300 audit(1707505677.186:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff75bdcc00 a2=3 a3=0 items=0 ppid=1 pid=6522 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:07:57.240908 kernel: audit: type=1327 audit(1707505677.186:496): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:57.186000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:07:57.203000 audit[6522]: USER_START pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.248048 kernel: audit: type=1105 audit(1707505677.203:497): pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.267236 kernel: audit: type=1103 audit(1707505677.209:498): pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.209000 audit[6525]: CRED_ACQ pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.671333 sshd[6522]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:57.672000 audit[6522]: USER_END pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.675807 systemd[1]: sshd@23-10.200.8.37:22-10.200.12.6:56314.service: Deactivated successfully.
Feb 9 19:07:57.677136 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 19:07:57.685087 systemd-logind[1374]: Session 26 logged out. Waiting for processes to exit.
Feb 9 19:07:57.686387 systemd-logind[1374]: Removed session 26.
Feb 9 19:07:57.672000 audit[6522]: CRED_DISP pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.707761 kernel: audit: type=1106 audit(1707505677.672:499): pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.707986 kernel: audit: type=1104 audit(1707505677.672:500): pid=6522 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:07:57.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.37:22-10.200.12.6:56314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:02.774720 systemd[1]: Started sshd@24-10.200.8.37:22-10.200.12.6:34954.service.
Feb 9 19:08:02.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.37:22-10.200.12.6:34954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:02.782078 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:08:02.782158 kernel: audit: type=1130 audit(1707505682.774:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.37:22-10.200.12.6:34954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:03.391948 sshd[6540]: Accepted publickey for core from 10.200.12.6 port 34954 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:08:03.390000 audit[6540]: USER_ACCT pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.399611 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:08:03.398000 audit[6540]: CRED_ACQ pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.420472 systemd[1]: Started session-27.scope.
Feb 9 19:08:03.421006 systemd-logind[1374]: New session 27 of user core.
Feb 9 19:08:03.430509 kernel: audit: type=1101 audit(1707505683.390:503): pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.430710 kernel: audit: type=1103 audit(1707505683.398:504): pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.398000 audit[6540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0c4c9970 a2=3 a3=0 items=0 ppid=1 pid=6540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:08:03.465313 kernel: audit: type=1006 audit(1707505683.398:505): pid=6540 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Feb 9 19:08:03.465407 kernel: audit: type=1300 audit(1707505683.398:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0c4c9970 a2=3 a3=0 items=0 ppid=1 pid=6540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:08:03.465429 kernel: audit: type=1327 audit(1707505683.398:505): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:08:03.398000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:08:03.422000 audit[6540]: USER_START pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.489963 kernel: audit: type=1105 audit(1707505683.422:506): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.490045 kernel: audit: type=1103 audit(1707505683.432:507): pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.432000 audit[6543]: CRED_ACQ pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.891550 sshd[6540]: pam_unix(sshd:session): session closed for user core
Feb 9 19:08:03.892000 audit[6540]: USER_END pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.895995 systemd[1]: sshd@24-10.200.8.37:22-10.200.12.6:34954.service: Deactivated successfully.
Feb 9 19:08:03.897542 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:08:03.903968 systemd-logind[1374]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:08:03.905027 systemd-logind[1374]: Removed session 27.
Feb 9 19:08:03.892000 audit[6540]: CRED_DISP pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.929122 kernel: audit: type=1106 audit(1707505683.892:508): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.929217 kernel: audit: type=1104 audit(1707505683.892:509): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:03.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.37:22-10.200.12.6:34954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:09.020647 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:08:09.020874 kernel: audit: type=1130 audit(1707505688.996:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.37:22-10.200.12.6:48710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:08.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.37:22-10.200.12.6:48710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:08.997590 systemd[1]: Started sshd@25-10.200.8.37:22-10.200.12.6:48710.service.
Feb 9 19:08:09.621000 audit[6552]: USER_ACCT pid=6552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.623291 sshd[6552]: Accepted publickey for core from 10.200.12.6 port 48710 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:08:09.642068 kernel: audit: type=1101 audit(1707505689.621:512): pid=6552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.642869 sshd[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:08:09.640000 audit[6552]: CRED_ACQ pid=6552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.659164 systemd[1]: Started session-28.scope.
Feb 9 19:08:09.660736 systemd-logind[1374]: New session 28 of user core.
Feb 9 19:08:09.677681 kernel: audit: type=1103 audit(1707505689.640:513): pid=6552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.677860 kernel: audit: type=1006 audit(1707505689.641:514): pid=6552 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 9 19:08:09.641000 audit[6552]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff94796620 a2=3 a3=0 items=0 ppid=1 pid=6552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:08:09.698724 kernel: audit: type=1300 audit(1707505689.641:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff94796620 a2=3 a3=0 items=0 ppid=1 pid=6552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:08:09.641000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:08:09.706499 kernel: audit: type=1327 audit(1707505689.641:514): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:08:09.706639 kernel: audit: type=1105 audit(1707505689.670:515): pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.670000 audit[6552]: USER_START pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.696000 audit[6555]: CRED_ACQ pid=6555 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.741650 kernel: audit: type=1103 audit(1707505689.696:516): pid=6555 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:09.924296 systemd[1]: run-containerd-runc-k8s.io-07b2935afbe085665de91fadda91290631a547b3a83389389afe4a5cc737227b-runc.wbc6kF.mount: Deactivated successfully.
Feb 9 19:08:10.153343 sshd[6552]: pam_unix(sshd:session): session closed for user core
Feb 9 19:08:10.154000 audit[6552]: USER_END pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:10.162696 systemd[1]: sshd@25-10.200.8.37:22-10.200.12.6:48710.service: Deactivated successfully.
Feb 9 19:08:10.164248 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 19:08:10.165956 systemd-logind[1374]: Session 28 logged out. Waiting for processes to exit.
Feb 9 19:08:10.167013 systemd-logind[1374]: Removed session 28.
Feb 9 19:08:10.154000 audit[6552]: CRED_DISP pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:10.192678 kernel: audit: type=1106 audit(1707505690.154:517): pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:10.192899 kernel: audit: type=1104 audit(1707505690.154:518): pid=6552 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 19:08:10.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.37:22-10.200.12.6:48710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:08:17.449871 systemd[1]: run-containerd-runc-k8s.io-770f1a29a7e53e336c6910ee2e7795181969493d52db52a81590440a73a0aa72-runc.AvNORl.mount: Deactivated successfully.
Feb 9 19:08:17.504153 systemd[1]: run-containerd-runc-k8s.io-6a25f3bd4812d50e764b86376a6802c5d57fba2b818af3abc768d2f540181924-runc.ZTK1Or.mount: Deactivated successfully.
Feb 9 19:08:21.200168 systemd[1]: run-containerd-runc-k8s.io-c7c000097f86adda136d0795e8d237e73cbffb7932b37524fbf96b1351211828-runc.Ziu5VR.mount: Deactivated successfully.
Feb 9 19:08:23.894865 kubelet[2584]: E0209 19:08:23.894770 2584 controller.go:189] failed to update lease, error: Put "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-00ed68a33d?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:08:24.164762 env[1412]: time="2024-02-09T19:08:24.153351893Z" level=info msg="shim disconnected" id=c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a
Feb 9 19:08:24.164762 env[1412]: time="2024-02-09T19:08:24.153421793Z" level=warning msg="cleaning up after shim disconnected" id=c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a namespace=k8s.io
Feb 9 19:08:24.164762 env[1412]: time="2024-02-09T19:08:24.153439393Z" level=info msg="cleaning up dead shim"
Feb 9 19:08:24.161651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a-rootfs.mount: Deactivated successfully.
Feb 9 19:08:24.167292 env[1412]: time="2024-02-09T19:08:24.167243228Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:08:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6694 runtime=io.containerd.runc.v2\n"
Feb 9 19:08:25.194098 kubelet[2584]: I0209 19:08:25.192670 2584 scope.go:115] "RemoveContainer" containerID="c1856f876ff6267c449775e857a40b6582cdc769a847733451fcbf499400de3a"
Feb 9 19:08:25.196960 env[1412]: time="2024-02-09T19:08:25.196912968Z" level=info msg="CreateContainer within sandbox \"4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 9 19:08:25.244152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153009892.mount: Deactivated successfully.
Feb 9 19:08:25.257179 env[1412]: time="2024-02-09T19:08:25.257128553Z" level=info msg="CreateContainer within sandbox \"4ab48731786f17923daf29b2bad800872df37b62acf9d54538a5016713f6de1b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4f3c2c4fb68faf4b1b03617b11c2d52bc3c4a6077ff0c3f9f985b5db03158bbf\""
Feb 9 19:08:25.257742 env[1412]: time="2024-02-09T19:08:25.257705859Z" level=info msg="StartContainer for \"4f3c2c4fb68faf4b1b03617b11c2d52bc3c4a6077ff0c3f9f985b5db03158bbf\""
Feb 9 19:08:25.354205 env[1412]: time="2024-02-09T19:08:25.354130596Z" level=info msg="StartContainer for \"4f3c2c4fb68faf4b1b03617b11c2d52bc3c4a6077ff0c3f9f985b5db03158bbf\" returns successfully"
Feb 9 19:08:25.414095 env[1412]: time="2024-02-09T19:08:25.414014578Z" level=info msg="shim disconnected" id=f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3
Feb 9 19:08:25.414371 env[1412]: time="2024-02-09T19:08:25.414348181Z" level=warning msg="cleaning up after shim disconnected" id=f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3 namespace=k8s.io
Feb 9 19:08:25.414449 env[1412]: time="2024-02-09T19:08:25.414435982Z" level=info msg="cleaning up dead shim"
Feb 9 19:08:25.437830 env[1412]: time="2024-02-09T19:08:25.437763809Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:08:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6758 runtime=io.containerd.runc.v2\n"
Feb 9 19:08:26.196992 kubelet[2584]: I0209 19:08:26.196953 2584 scope.go:115] "RemoveContainer" containerID="f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3"
Feb 9 19:08:26.200093 env[1412]: time="2024-02-09T19:08:26.200009009Z" level=info msg="CreateContainer within sandbox \"2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:08:26.233319 systemd[1]: run-containerd-runc-k8s.io-4f3c2c4fb68faf4b1b03617b11c2d52bc3c4a6077ff0c3f9f985b5db03158bbf-runc.OxYq81.mount: Deactivated successfully.
Feb 9 19:08:26.233576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f93d12db9233dba8b3dec469e180ec2de060d96db9ef19c33560f6c5f78ffef3-rootfs.mount: Deactivated successfully.
Feb 9 19:08:26.278001 env[1412]: time="2024-02-09T19:08:26.277916164Z" level=info msg="CreateContainer within sandbox \"2a4c836b6ffa2c14edffde8a01f8ead201cb186a720b4463f826c02ec57b1854\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d14a928d7d695093db101f773439761919a78cf1f7d2041839ab8de1c97514de\""
Feb 9 19:08:26.278805 env[1412]: time="2024-02-09T19:08:26.278769872Z" level=info msg="StartContainer for \"d14a928d7d695093db101f773439761919a78cf1f7d2041839ab8de1c97514de\""
Feb 9 19:08:26.380511 env[1412]: time="2024-02-09T19:08:26.380113353Z" level=info msg="StartContainer for \"d14a928d7d695093db101f773439761919a78cf1f7d2041839ab8de1c97514de\" returns successfully"
Feb 9 19:08:28.502996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9-rootfs.mount: Deactivated successfully.
Feb 9 19:08:28.504971 env[1412]: time="2024-02-09T19:08:28.504913845Z" level=info msg="shim disconnected" id=72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9
Feb 9 19:08:28.505519 env[1412]: time="2024-02-09T19:08:28.504974346Z" level=warning msg="cleaning up after shim disconnected" id=72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9 namespace=k8s.io
Feb 9 19:08:28.505519 env[1412]: time="2024-02-09T19:08:28.504988046Z" level=info msg="cleaning up dead shim"
Feb 9 19:08:28.516137 env[1412]: time="2024-02-09T19:08:28.516074852Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:08:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6818 runtime=io.containerd.runc.v2\n"
Feb 9 19:08:28.933780 kubelet[2584]: E0209 19:08:28.932078 2584 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.37:34414->10.200.8.20:2379: read: connection timed out
Feb 9 19:08:29.210926 kubelet[2584]: I0209 19:08:29.210626 2584 scope.go:115] "RemoveContainer" containerID="72948af5eb65728c8d519e937d34613cfc6fc1e632e39bcb2debf2d4616604f9"
Feb 9 19:08:29.213095 env[1412]: time="2024-02-09T19:08:29.213049839Z" level=info msg="CreateContainer within sandbox \"f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:08:29.249698 env[1412]: time="2024-02-09T19:08:29.249633989Z" level=info msg="CreateContainer within sandbox \"f207cc9b3e8d9ad35806cfd2356aed3b1e11a10afdd23a84fce8e01d618e6a65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"050930ff880ddd5df667200399570b65a09b2267f1c192a5b72279a11e41471b\""
Feb 9 19:08:29.250432 env[1412]: time="2024-02-09T19:08:29.250390196Z" level=info msg="StartContainer for \"050930ff880ddd5df667200399570b65a09b2267f1c192a5b72279a11e41471b\""
Feb 9 19:08:29.354292 env[1412]: time="2024-02-09T19:08:29.354208289Z" level=info msg="StartContainer for \"050930ff880ddd5df667200399570b65a09b2267f1c192a5b72279a11e41471b\" returns successfully"
Feb 9 19:08:29.381987 kubelet[2584]: E0209 19:08:29.380189 2584 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-00ed68a33d.17b24762edd58936", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-00ed68a33d", UID:"f23bb01936aabb8aca4745fdb21d530c", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-00ed68a33d"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 8, 18, 949400886, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 8, 18, 949400886, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.37:34238->10.200.8.20:2379: read: connection timed out' (will not retry!)
Feb 9 19:08:29.503261 systemd[1]: run-containerd-runc-k8s.io-050930ff880ddd5df667200399570b65a09b2267f1c192a5b72279a11e41471b-runc.SS7OUI.mount: Deactivated successfully.