Jun 25 16:24:32.026429 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:24:32.026460 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:24:32.026474 kernel: BIOS-provided physical RAM map: Jun 25 16:24:32.026484 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 16:24:32.026494 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 16:24:32.026504 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 16:24:32.026516 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 16:24:32.026527 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 16:24:32.026537 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 16:24:32.026546 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 16:24:32.026555 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 16:24:32.026565 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 16:24:32.026574 kernel: printk: bootconsole [earlyser0] enabled Jun 25 16:24:32.026584 kernel: NX (Execute Disable) protection: active Jun 25 16:24:32.026599 kernel: efi: EFI v2.70 by Microsoft Jun 25 16:24:32.026610 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 Jun 25 16:24:32.026621 kernel: SMBIOS 3.1.0 present. 
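As a reading aid for the command line logged above, here is a minimal Python sketch (not part of the boot flow) that splits that exact string into key/value options, making fields such as root=LABEL=ROOT and verity.usrhash easy to pick out. Reading the same data from /proc/cmdline on a live system is an assumption, not something shown in this log, and repeated keys such as console= keep only their last value in this simple form.

    import shlex

    # Kernel command line copied verbatim from the log entry above.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
        "console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected "
        "flatcar.oem.id=azure flatcar.autologin "
        "verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9"
    )

    def parse_cmdline(line):
        """Return {option: value}; bare flags (e.g. flatcar.autologin) map to True."""
        opts = {}
        for token in shlex.split(line):
            key, sep, value = token.partition("=")
            opts[key] = value if sep else True
        return opts

    opts = parse_cmdline(cmdline)
    print(opts["root"])             # LABEL=ROOT
    print(opts["verity.usrhash"])   # hash dm-verity uses to check the /usr partition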
Jun 25 16:24:32.026631 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 16:24:32.026642 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 16:24:32.026653 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 16:24:32.026663 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 16:24:32.026674 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 16:24:32.026686 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 16:24:32.026698 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 16:24:32.026714 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 16:24:32.026726 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 16:24:32.026739 kernel: tsc: Detected 2593.907 MHz processor Jun 25 16:24:32.026751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:24:32.026764 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:24:32.026776 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 16:24:32.026790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:24:32.026802 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 16:24:32.026814 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 16:24:32.026859 kernel: Using GB pages for direct mapping Jun 25 16:24:32.026871 kernel: Secure boot disabled Jun 25 16:24:32.026884 kernel: ACPI: Early table checksum verification disabled Jun 25 16:24:32.026897 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 16:24:32.026910 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.026921 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.026933 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 16:24:32.026950 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 16:24:32.026965 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.026977 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.026990 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.027000 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.027011 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.027024 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.027039 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:24:32.027052 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 16:24:32.027063 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 16:24:32.027075 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 16:24:32.027086 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 16:24:32.027098 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 16:24:32.027110 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 16:24:32.027122 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Jun 25 16:24:32.027136 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 16:24:32.027148 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 16:24:32.027159 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 16:24:32.027170 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:24:32.027183 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:24:32.027195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 16:24:32.027207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 16:24:32.027218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 16:24:32.027229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 16:24:32.027244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 16:24:32.027255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 16:24:32.027267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 16:24:32.027278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 16:24:32.027290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 16:24:32.027302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 16:24:32.027313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 16:24:32.027325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 16:24:32.027337 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 16:24:32.027352 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 16:24:32.027364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 16:24:32.027376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 16:24:32.027388 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 16:24:32.027399 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 16:24:32.027411 kernel: Zone ranges: Jun 25 16:24:32.027425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:24:32.027438 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 16:24:32.027453 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 16:24:32.027468 kernel: Movable zone start for each node Jun 25 16:24:32.027481 kernel: Early memory node ranges Jun 25 16:24:32.027493 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 16:24:32.027505 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 16:24:32.027516 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 16:24:32.027527 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 16:24:32.027539 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 16:24:32.027550 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:24:32.027562 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 16:24:32.027576 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jun 25 16:24:32.027588 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 16:24:32.027600 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 16:24:32.027611 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 
16:24:32.027623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:24:32.027635 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:24:32.027646 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 16:24:32.027658 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:24:32.027670 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 16:24:32.027685 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 16:24:32.027697 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:24:32.027708 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:24:32.027721 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:24:32.027732 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:24:32.027744 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:24:32.027756 kernel: Hyper-V: PV spinlocks enabled Jun 25 16:24:32.027768 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:24:32.027779 kernel: Fallback order for Node 0: 0 Jun 25 16:24:32.027794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 16:24:32.027805 kernel: Policy zone: Normal Jun 25 16:24:32.027830 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:24:32.027842 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:24:32.027852 kernel: random: crng init done Jun 25 16:24:32.027863 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 16:24:32.027874 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:24:32.027884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:24:32.027900 kernel: software IO TLB: area num 2. Jun 25 16:24:32.027924 kernel: Memory: 8065528K/8387460K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 321672K reserved, 0K cma-reserved) Jun 25 16:24:32.027942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:24:32.027954 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:24:32.027967 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:24:32.027978 kernel: Dynamic Preempt: voluntary Jun 25 16:24:32.027990 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:24:32.028003 kernel: rcu: RCU event tracing is enabled. Jun 25 16:24:32.028024 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:24:32.028036 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:24:32.028048 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:24:32.028063 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:24:32.033782 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
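The memory total reported above can be cross-checked against the BIOS-e820 map at the top of the log. A small worked example in Python, using only ranges copied from that map; the 4 KiB difference from the kernel's figure matches the first page that the "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" line removes.

    # Worked example: sum the "usable" BIOS-e820 ranges from the start of this log
    # (end addresses are inclusive) to see where the "Memory: .../8387460K" total
    # above comes from.
    usable = [
        (0x0000000000000000, 0x000000000009ffff),
        (0x0000000000100000, 0x000000003ff40fff),
        (0x000000003ffff000, 0x000000003fffffff),
        (0x0000000100000000, 0x00000002bfffffff),
    ]

    total_bytes = sum(end - start + 1 for start, end in usable)
    print(total_bytes // 1024, "KiB")   # 8387464 KiB, i.e. roughly 8 GiB
    # The kernel reports 8387460K: 4 KiB less, matching the single page that the
    # e820 update above turned from usable into reserved.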
Jun 25 16:24:32.033797 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:24:32.033806 kernel: Using NULL legacy PIC Jun 25 16:24:32.033814 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 16:24:32.033890 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:24:32.033901 kernel: Console: colour dummy device 80x25 Jun 25 16:24:32.033910 kernel: printk: console [tty1] enabled Jun 25 16:24:32.033920 kernel: printk: console [ttyS0] enabled Jun 25 16:24:32.033930 kernel: printk: bootconsole [earlyser0] disabled Jun 25 16:24:32.033938 kernel: ACPI: Core revision 20220331 Jun 25 16:24:32.033949 kernel: Failed to register legacy timer interrupt Jun 25 16:24:32.033956 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:24:32.033967 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 16:24:32.033975 kernel: Hyper-V: Using IPI hypercalls Jun 25 16:24:32.033986 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jun 25 16:24:32.033996 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:24:32.034004 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:24:32.034013 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:24:32.034022 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:24:32.034029 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:24:32.034039 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:24:32.034047 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 16:24:32.034054 kernel: RETBleed: Vulnerable Jun 25 16:24:32.034067 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:24:32.034074 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:24:32.034081 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:24:32.034090 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:24:32.034099 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:24:32.034107 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:24:32.034114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:24:32.034121 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 16:24:32.034131 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 16:24:32.034140 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 16:24:32.034147 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:24:32.034157 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 16:24:32.034167 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 16:24:32.034174 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 16:24:32.034182 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 16:24:32.034193 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:24:32.034200 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:24:32.034208 kernel: LSM: Security Framework initializing Jun 25 16:24:32.034218 kernel: SELinux: Initializing. 
Jun 25 16:24:32.034226 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:24:32.034234 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:24:32.034243 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 16:24:32.034251 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:24:32.034264 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:24:32.034272 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:24:32.034280 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:24:32.034290 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:24:32.034297 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:24:32.034306 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 16:24:32.034316 kernel: signal: max sigframe size: 3632 Jun 25 16:24:32.034323 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:24:32.034333 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:24:32.034344 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:24:32.034355 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:24:32.034364 kernel: x86: Booting SMP configuration: Jun 25 16:24:32.034375 kernel: .... node #0, CPUs: #1 Jun 25 16:24:32.034384 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 16:24:32.034395 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 25 16:24:32.034403 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:24:32.034412 kernel: smpboot: Max logical packages: 1 Jun 25 16:24:32.034421 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 16:24:32.034432 kernel: devtmpfs: initialized Jun 25 16:24:32.034440 kernel: x86/mm: Memory block size: 128MB Jun 25 16:24:32.034448 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 16:24:32.034458 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:24:32.034466 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:24:32.034474 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:24:32.034481 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:24:32.034489 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:24:32.034499 kernel: audit: type=2000 audit(1719332671.031:1): state=initialized audit_enabled=0 res=1 Jun 25 16:24:32.034509 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:24:32.034517 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:24:32.034527 kernel: cpuidle: using governor menu Jun 25 16:24:32.034535 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:24:32.034543 kernel: dca service started, version 1.12.1 Jun 25 16:24:32.034553 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 16:24:32.034561 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 16:24:32.034568 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:24:32.034579 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:24:32.034589 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:24:32.034599 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:24:32.034608 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:24:32.034616 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:24:32.034626 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:24:32.034634 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:24:32.034642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:24:32.034652 kernel: ACPI: Interpreter enabled Jun 25 16:24:32.034660 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:24:32.034672 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:24:32.034680 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:24:32.034688 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 16:24:32.034696 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 16:24:32.034706 kernel: iommu: Default domain type: Translated Jun 25 16:24:32.034714 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:24:32.034722 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:24:32.034730 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:24:32.034740 kernel: PTP clock support registered Jun 25 16:24:32.034750 kernel: Registered efivars operations Jun 25 16:24:32.034758 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:24:32.034768 kernel: PCI: System does not support PCI Jun 25 16:24:32.034776 kernel: vgaarb: loaded Jun 25 16:24:32.034784 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 16:24:32.034794 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:24:32.034802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:24:32.034811 kernel: pnp: PnP ACPI init Jun 25 16:24:32.034837 kernel: pnp: PnP ACPI: found 3 devices Jun 25 16:24:32.034849 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:24:32.034860 kernel: NET: Registered PF_INET protocol family Jun 25 16:24:32.034869 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:24:32.034878 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 16:24:32.034888 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:24:32.034897 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:24:32.034906 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 16:24:32.034917 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 16:24:32.034925 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:24:32.034936 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:24:32.034946 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:24:32.034953 kernel: NET: Registered PF_XDP protocol family Jun 25 16:24:32.034961 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:24:32.034971 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 16:24:32.034979 kernel: software IO TLB: mapped [mem 
0x000000003ae75000-0x000000003ee75000] (64MB) Jun 25 16:24:32.034987 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:24:32.034997 kernel: Initialise system trusted keyrings Jun 25 16:24:32.035004 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 16:24:32.035014 kernel: Key type asymmetric registered Jun 25 16:24:32.035024 kernel: Asymmetric key parser 'x509' registered Jun 25 16:24:32.035032 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:24:32.035039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:24:32.035047 kernel: io scheduler mq-deadline registered Jun 25 16:24:32.035058 kernel: io scheduler kyber registered Jun 25 16:24:32.035066 kernel: io scheduler bfq registered Jun 25 16:24:32.035073 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:24:32.035084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:24:32.035094 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:24:32.035101 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 16:24:32.035111 kernel: i8042: PNP: No PS/2 controller found. Jun 25 16:24:32.035237 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 16:24:32.035316 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T16:24:31 UTC (1719332671) Jun 25 16:24:32.035391 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 16:24:32.035401 kernel: fail to initialize ptp_kvm Jun 25 16:24:32.035412 kernel: intel_pstate: CPU model not supported Jun 25 16:24:32.035420 kernel: efifb: probing for efifb Jun 25 16:24:32.035430 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 16:24:32.035438 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 16:24:32.035446 kernel: efifb: scrolling: redraw Jun 25 16:24:32.035455 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 16:24:32.035464 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:24:32.035471 kernel: fb0: EFI VGA frame buffer device Jun 25 16:24:32.035482 kernel: pstore: Registered efi as persistent store backend Jun 25 16:24:32.035493 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:24:32.035502 kernel: Segment Routing with IPv6 Jun 25 16:24:32.035511 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:24:32.035519 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:24:32.035529 kernel: Key type dns_resolver registered Jun 25 16:24:32.035537 kernel: IPI shorthand broadcast: enabled Jun 25 16:24:32.035545 kernel: sched_clock: Marking stable (942337900, 32593100)->(1208726500, -233795500) Jun 25 16:24:32.035553 kernel: registered taskstats version 1 Jun 25 16:24:32.035563 kernel: Loading compiled-in X.509 certificates Jun 25 16:24:32.035574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:24:32.035587 kernel: Key type .fscrypt registered Jun 25 16:24:32.035594 kernel: Key type fscrypt-provisioning registered Jun 25 16:24:32.035605 kernel: pstore: Using crash dump compression: deflate Jun 25 16:24:32.035613 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:24:32.035622 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:24:32.035631 kernel: ima: No architecture policies found Jun 25 16:24:32.035639 kernel: clk: Disabling unused clocks Jun 25 16:24:32.035649 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:24:32.035659 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:24:32.035670 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:24:32.035679 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:24:32.035688 kernel: Run /init as init process Jun 25 16:24:32.035697 kernel: with arguments: Jun 25 16:24:32.035704 kernel: /init Jun 25 16:24:32.035711 kernel: with environment: Jun 25 16:24:32.035719 kernel: HOME=/ Jun 25 16:24:32.035726 kernel: TERM=linux Jun 25 16:24:32.035736 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:24:32.035749 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:24:32.035761 systemd[1]: Detected virtualization microsoft. Jun 25 16:24:32.035770 systemd[1]: Detected architecture x86-64. Jun 25 16:24:32.035777 systemd[1]: Running in initrd. Jun 25 16:24:32.035788 systemd[1]: No hostname configured, using default hostname. Jun 25 16:24:32.035795 systemd[1]: Hostname set to . Jun 25 16:24:32.035803 systemd[1]: Initializing machine ID from random generator. Jun 25 16:24:32.035824 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:24:32.035833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:24:32.035844 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:24:32.035852 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:24:32.035860 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:24:32.035870 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:24:32.035879 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:24:32.035892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:24:32.035901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:24:32.035909 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:24:32.035920 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:24:32.035929 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:24:32.035938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:24:32.035949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:24:32.035956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:24:32.035967 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:24:32.035978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:24:32.035986 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:24:32.035996 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:24:32.036005 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:24:32.036016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:24:32.036026 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:24:32.036040 systemd-journald[178]: Journal started Jun 25 16:24:32.036090 systemd-journald[178]: Runtime Journal (/run/log/journal/3eba28486eea44078f3cf33f29d3b115) is 8.0M, max 158.8M, 150.8M free. Jun 25 16:24:32.043663 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:24:32.043654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:24:32.050292 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:24:32.063422 kernel: audit: type=1130 audit(1719332672.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.061426 systemd-modules-load[179]: Inserted module 'overlay' Jun 25 16:24:32.063658 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:24:32.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.087577 kernel: audit: type=1130 audit(1719332672.049:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.087611 kernel: audit: type=1130 audit(1719332672.062:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.089205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:24:32.096501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:24:32.104650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:24:32.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.121127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:24:32.122611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:24:32.133336 kernel: audit: type=1130 audit(1719332672.070:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:32.138408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:24:32.149907 kernel: audit: type=1130 audit(1719332672.120:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.159594 kernel: audit: type=1130 audit(1719332672.121:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.159631 kernel: audit: type=1334 audit(1719332672.122:8): prog-id=6 op=LOAD Jun 25 16:24:32.122000 audit: BPF prog-id=6 op=LOAD Jun 25 16:24:32.165229 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:24:32.176619 systemd-modules-load[179]: Inserted module 'br_netfilter' Jun 25 16:24:32.177805 kernel: Bridge firewalling registered Jun 25 16:24:32.177691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:24:32.191670 kernel: audit: type=1130 audit(1719332672.179:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.188988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:24:32.209879 dracut-cmdline[198]: dracut-dracut-053 Jun 25 16:24:32.217207 dracut-cmdline[198]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:24:32.217722 systemd-resolved[189]: Positive Trust Anchors: Jun 25 16:24:32.217734 systemd-resolved[189]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:24:32.217772 systemd-resolved[189]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:24:32.220679 systemd-resolved[189]: Defaulting to hostname 'linux'. Jun 25 16:24:32.221586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jun 25 16:24:32.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.260403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:24:32.275382 kernel: audit: type=1130 audit(1719332672.259:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.279376 kernel: SCSI subsystem initialized Jun 25 16:24:32.301019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:24:32.301108 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:24:32.304729 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:24:32.309741 systemd-modules-load[179]: Inserted module 'dm_multipath' Jun 25 16:24:32.310884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:24:32.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.322033 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:24:32.332664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:24:32.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.360844 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:24:32.373845 kernel: iscsi: registered transport (tcp) Jun 25 16:24:32.397551 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:24:32.397640 kernel: QLogic iSCSI HBA Driver Jun 25 16:24:32.432333 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:24:32.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.444056 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:24:32.512859 kernel: raid6: avx512x4 gen() 18227 MB/s Jun 25 16:24:32.531835 kernel: raid6: avx512x2 gen() 17938 MB/s Jun 25 16:24:32.550830 kernel: raid6: avx512x1 gen() 18013 MB/s Jun 25 16:24:32.570838 kernel: raid6: avx2x4 gen() 17840 MB/s Jun 25 16:24:32.589835 kernel: raid6: avx2x2 gen() 17922 MB/s Jun 25 16:24:32.611256 kernel: raid6: avx2x1 gen() 13882 MB/s Jun 25 16:24:32.611291 kernel: raid6: using algorithm avx512x4 gen() 18227 MB/s Jun 25 16:24:32.632773 kernel: raid6: .... xor() 7222 MB/s, rmw enabled Jun 25 16:24:32.632799 kernel: raid6: using avx512x2 recovery algorithm Jun 25 16:24:32.638846 kernel: xor: automatically using best checksumming function avx Jun 25 16:24:32.779847 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:24:32.788281 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:24:32.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:32.793000 audit: BPF prog-id=7 op=LOAD Jun 25 16:24:32.793000 audit: BPF prog-id=8 op=LOAD Jun 25 16:24:32.797029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:24:32.814082 systemd-udevd[380]: Using default interface naming scheme 'v252'. Jun 25 16:24:32.818869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:24:32.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.830025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:24:32.844623 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Jun 25 16:24:32.876309 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:24:32.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.883078 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:24:32.921368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:24:32.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:32.978841 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:24:33.001525 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:24:33.001596 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 16:24:33.005122 kernel: AES CTR mode by8 optimization enabled Jun 25 16:24:33.020838 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 16:24:33.020883 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 16:24:33.029642 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 16:24:33.034837 kernel: scsi host0: storvsc_host_t Jun 25 16:24:33.041105 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 16:24:33.041145 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 16:24:33.043837 kernel: scsi host1: storvsc_host_t Jun 25 16:24:33.046835 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 16:24:33.051862 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 16:24:33.070853 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 16:24:33.082823 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 16:24:33.082879 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 16:24:33.097831 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 16:24:33.099877 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:24:33.099901 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 16:24:33.109627 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 16:24:33.123212 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 16:24:33.123393 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 16:24:33.123555 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 16:24:33.123726 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, 
supports DPO and FUA Jun 25 16:24:33.123913 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:24:33.123933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 16:24:33.229644 kernel: hv_netvsc 000d3ab1-4dde-000d-3ab1-4dde000d3ab1 eth0: VF slot 1 added Jun 25 16:24:33.239001 kernel: hv_vmbus: registering driver hv_pci Jun 25 16:24:33.244836 kernel: hv_pci 828690d3-2443-46d2-b957-3bc6903e5428: PCI VMBus probing: Using version 0x10004 Jun 25 16:24:33.291562 kernel: hv_pci 828690d3-2443-46d2-b957-3bc6903e5428: PCI host bridge to bus 2443:00 Jun 25 16:24:33.291751 kernel: pci_bus 2443:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 16:24:33.291932 kernel: pci_bus 2443:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 16:24:33.292069 kernel: pci 2443:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 16:24:33.292236 kernel: pci 2443:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 16:24:33.292389 kernel: pci 2443:00:02.0: enabling Extended Tags Jun 25 16:24:33.292538 kernel: pci 2443:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2443:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 16:24:33.292692 kernel: pci_bus 2443:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 16:24:33.292844 kernel: pci 2443:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 16:24:33.456556 kernel: mlx5_core 2443:00:02.0: enabling device (0000 -> 0002) Jun 25 16:24:33.739060 kernel: mlx5_core 2443:00:02.0: firmware version: 14.30.1284 Jun 25 16:24:33.739252 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (431) Jun 25 16:24:33.739273 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (432) Jun 25 16:24:33.739292 kernel: mlx5_core 2443:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jun 25 16:24:33.739447 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:24:33.739464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:24:33.739478 kernel: mlx5_core 2443:00:02.0: Supported tc offload range - chains: 1, prios: 1 Jun 25 16:24:33.739628 kernel: hv_netvsc 000d3ab1-4dde-000d-3ab1-4dde000d3ab1 eth0: VF registering: eth1 Jun 25 16:24:33.739768 kernel: mlx5_core 2443:00:02.0 eth1: joined to eth0 Jun 25 16:24:33.464118 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 16:24:33.503498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 16:24:33.639598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 16:24:33.647884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 16:24:33.651017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 16:24:33.665033 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:24:33.779436 kernel: mlx5_core 2443:00:02.0 enP9283s1: renamed from eth1 Jun 25 16:24:34.692213 disk-uuid[571]: The operation has completed successfully. Jun 25 16:24:34.694971 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:24:34.792313 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:24:34.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:24:34.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:34.792436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:24:34.811015 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:24:34.820200 sh[657]: Success Jun 25 16:24:34.848843 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:24:35.041344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:24:35.054296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:24:35.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.057227 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:24:35.074842 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:24:35.074882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:24:35.081106 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:24:35.084130 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:24:35.086871 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:24:35.387217 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:24:35.390463 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:24:35.406094 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:24:35.417182 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:24:35.430214 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:24:35.430241 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:24:35.430252 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:24:35.467212 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:24:35.494842 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:24:35.499454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:24:35.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.507708 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:24:35.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.519184 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:24:35.526000 audit: BPF prog-id=9 op=LOAD Jun 25 16:24:35.527632 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 25 16:24:35.553007 systemd-networkd[839]: lo: Link UP Jun 25 16:24:35.553017 systemd-networkd[839]: lo: Gained carrier Jun 25 16:24:35.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.553552 systemd-networkd[839]: Enumeration completed Jun 25 16:24:35.553646 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:24:35.556933 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:35.556938 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:24:35.558464 systemd[1]: Reached target network.target - Network. Jun 25 16:24:35.586024 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:24:35.594264 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:24:35.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.601387 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:24:35.606844 iscsid[844]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:24:35.606844 iscsid[844]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:24:35.606844 iscsid[844]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:24:35.606844 iscsid[844]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:24:35.606844 iscsid[844]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:24:35.606844 iscsid[844]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:24:35.638884 kernel: mlx5_core 2443:00:02.0 enP9283s1: Link up Jun 25 16:24:35.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.607669 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:24:35.633053 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:24:35.653843 kernel: hv_netvsc 000d3ab1-4dde-000d-3ab1-4dde000d3ab1 eth0: Data path switched to VF: enP9283s1 Jun 25 16:24:35.653947 systemd-networkd[839]: enP9283s1: Link UP Jun 25 16:24:35.654074 systemd-networkd[839]: eth0: Link UP Jun 25 16:24:35.658287 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:24:35.666677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:35.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:35.660869 systemd-networkd[839]: eth0: Gained carrier Jun 25 16:24:35.660880 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:35.663905 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:24:35.679250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:24:35.685370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:24:35.688886 systemd-networkd[839]: enP9283s1: Gained carrier Jun 25 16:24:35.697414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:24:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.707847 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:24:35.733463 systemd-networkd[839]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:24:36.236703 ignition[838]: Ignition 2.15.0 Jun 25 16:24:36.236717 ignition[838]: Stage: fetch-offline Jun 25 16:24:36.246011 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:24:36.246044 kernel: audit: type=1130 audit(1719332676.241:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.238248 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:24:36.236771 ignition[838]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:36.257090 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
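The iscsid warnings a little earlier in the log are harmless on this boot (no iSCSI targets are configured), but the message itself describes the fix: a one-line /etc/iscsi/initiatorname.iscsi file. Below is a minimal sketch of creating such a file; the IQN is an illustrative placeholder following the iqn.yyyy-mm.reversed-domain[:identifier] convention, not a value taken from this machine.

    from pathlib import Path

    # Illustrative placeholder IQN; any stable, unique name in this format works.
    INITIATOR_NAME = "InitiatorName=iqn.2001-04.com.example:node1\n"

    # Needs to run as root on the target root filesystem.
    path = Path("/etc/iscsi/initiatorname.iscsi")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(INITIATOR_NAME)
    print("wrote", path)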
Jun 25 16:24:36.236784 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:36.236937 ignition[838]: parsed url from cmdline: "" Jun 25 16:24:36.236943 ignition[838]: no config URL provided Jun 25 16:24:36.236951 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:24:36.236965 ignition[838]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:24:36.236972 ignition[838]: failed to fetch config: resource requires networking Jun 25 16:24:36.237372 ignition[838]: Ignition finished successfully Jun 25 16:24:36.281990 ignition[863]: Ignition 2.15.0 Jun 25 16:24:36.282004 ignition[863]: Stage: fetch Jun 25 16:24:36.282169 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:36.282179 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:36.282290 ignition[863]: parsed url from cmdline: "" Jun 25 16:24:36.282293 ignition[863]: no config URL provided Jun 25 16:24:36.282298 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:24:36.282306 ignition[863]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:24:36.282331 ignition[863]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 16:24:36.374110 ignition[863]: GET result: OK Jun 25 16:24:36.374292 ignition[863]: config has been read from IMDS userdata Jun 25 16:24:36.374328 ignition[863]: parsing config with SHA512: bfd66bd7a71149ff7581f2642cc9d47e9e93a55f2d10eb6cb772d3ffaa3416f259a46d89ee8e3f2fcecb8eaedbc9470bff8ae3516814ca36e26037d4538ec8d9 Jun 25 16:24:36.383369 unknown[863]: fetched base config from "system" Jun 25 16:24:36.383382 unknown[863]: fetched base config from "system" Jun 25 16:24:36.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.383805 ignition[863]: fetch: fetch complete Jun 25 16:24:36.400400 kernel: audit: type=1130 audit(1719332676.387:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.383389 unknown[863]: fetched user config from "azure" Jun 25 16:24:36.383810 ignition[863]: fetch: fetch passed Jun 25 16:24:36.385223 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:24:36.383869 ignition[863]: Ignition finished successfully Jun 25 16:24:36.402684 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:24:36.426113 ignition[869]: Ignition 2.15.0 Jun 25 16:24:36.426125 ignition[869]: Stage: kargs Jun 25 16:24:36.426257 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:36.426271 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:36.427251 ignition[869]: kargs: kargs passed Jun 25 16:24:36.427296 ignition[869]: Ignition finished successfully Jun 25 16:24:36.437093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:24:36.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:36.453836 kernel: audit: type=1130 audit(1719332676.444:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.455090 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:24:36.473567 ignition[875]: Ignition 2.15.0 Jun 25 16:24:36.473629 ignition[875]: Stage: disks Jun 25 16:24:36.473773 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:36.473787 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:36.482719 ignition[875]: disks: disks passed Jun 25 16:24:36.482892 ignition[875]: Ignition finished successfully Jun 25 16:24:36.487267 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:24:36.499316 kernel: audit: type=1130 audit(1719332676.489:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.498628 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:24:36.505279 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:24:36.511556 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:24:36.511657 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:24:36.512545 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:24:36.528087 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:24:36.574615 systemd-fsck[883]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 16:24:36.580470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:24:36.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.594854 kernel: audit: type=1130 audit(1719332676.583:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:36.596028 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:24:36.682850 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:24:36.683486 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:24:36.688005 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:24:36.725097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:24:36.730967 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:24:36.737384 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 16:24:36.747814 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (892) Jun 25 16:24:36.743144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jun 25 16:24:36.743192 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:24:36.761441 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:24:36.770022 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:24:36.770052 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:24:36.770066 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:24:36.783473 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:24:36.790463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:24:36.946203 systemd-networkd[839]: eth0: Gained IPv6LL Jun 25 16:24:37.224177 coreos-metadata[894]: Jun 25 16:24:37.224 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 16:24:37.228796 coreos-metadata[894]: Jun 25 16:24:37.226 INFO Fetch successful Jun 25 16:24:37.228796 coreos-metadata[894]: Jun 25 16:24:37.226 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 16:24:37.239131 coreos-metadata[894]: Jun 25 16:24:37.239 INFO Fetch successful Jun 25 16:24:37.249983 coreos-metadata[894]: Jun 25 16:24:37.249 INFO wrote hostname ci-3815.2.4-a-a46e2cd05c to /sysroot/etc/hostname Jun 25 16:24:37.256230 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:24:37.272376 kernel: audit: type=1130 audit(1719332677.258:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:37.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:37.312151 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:24:37.330284 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:24:37.338239 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:24:37.353615 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:24:37.935454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:24:37.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:37.950838 kernel: audit: type=1130 audit(1719332677.940:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:37.951043 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:24:37.958540 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:24:37.964445 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:24:37.968140 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:24:37.989697 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:24:38.006553 kernel: audit: type=1130 audit(1719332677.992:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:24:37.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:37.996027 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:24:38.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:38.015595 ignition[1008]: INFO : Ignition 2.15.0 Jun 25 16:24:38.015595 ignition[1008]: INFO : Stage: mount Jun 25 16:24:38.015595 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:38.015595 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:38.015595 ignition[1008]: INFO : mount: mount passed Jun 25 16:24:38.015595 ignition[1008]: INFO : Ignition finished successfully Jun 25 16:24:38.039746 kernel: audit: type=1130 audit(1719332678.005:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:38.039780 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1017) Jun 25 16:24:38.039794 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:24:38.017951 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:24:38.049214 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:24:38.049240 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:24:38.025690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:24:38.057488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 16:24:38.081668 ignition[1035]: INFO : Ignition 2.15.0 Jun 25 16:24:38.083984 ignition[1035]: INFO : Stage: files Jun 25 16:24:38.083984 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:38.083984 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:38.092817 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:24:38.096608 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:24:38.100599 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:24:38.160135 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:24:38.164877 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:24:38.164877 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:24:38.164877 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:24:38.164877 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:24:38.160645 unknown[1035]: wrote ssh authorized keys file for user: core Jun 25 16:24:38.263066 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:24:38.373963 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:38.379837 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:24:39.003775 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:24:39.392004 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:39.392004 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:24:39.404686 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:24:39.409895 ignition[1035]: INFO : files: files passed Jun 25 16:24:39.409895 ignition[1035]: INFO : Ignition finished successfully Jun 25 16:24:39.451138 kernel: audit: type=1130 audit(1719332679.412:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.406440 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:24:39.457114 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:24:39.466742 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:24:39.472854 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:24:39.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.472967 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 25 16:24:39.482469 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:39.482469 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:39.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.493958 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:39.484519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:24:39.489533 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:24:39.514173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:24:39.527410 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:24:39.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.527506 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:24:39.533793 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:24:39.540181 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:24:39.543328 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:24:39.558246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:24:39.572581 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:24:39.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.580086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:24:39.593045 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:24:39.599059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:24:39.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:39.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.599310 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:24:39.599700 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:24:39.599843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:24:39.600313 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:24:39.670955 iscsid[844]: iscsid shutting down. Jun 25 16:24:39.600655 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:24:39.601121 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:24:39.601553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:24:39.602111 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:24:39.694083 ignition[1079]: INFO : Ignition 2.15.0 Jun 25 16:24:39.694083 ignition[1079]: INFO : Stage: umount Jun 25 16:24:39.694083 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:39.694083 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:24:39.694083 ignition[1079]: INFO : umount: umount passed Jun 25 16:24:39.694083 ignition[1079]: INFO : Ignition finished successfully Jun 25 16:24:39.602559 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:24:39.603014 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:24:39.603710 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:24:39.604200 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:24:39.604639 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:24:39.605177 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:24:39.605598 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:24:39.605713 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:24:39.606283 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:24:39.606608 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:24:39.606718 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:24:39.607176 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:24:39.607295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:24:39.607581 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:24:39.607690 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:24:39.608046 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 16:24:39.608159 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:24:39.662035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:24:39.670953 systemd[1]: Stopping iscsid.service - Open-iSCSI... 
Jun 25 16:24:39.688649 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:24:39.698767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:24:39.717187 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:24:39.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.771335 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:24:39.771491 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:24:39.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.782406 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:24:39.783166 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:24:39.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.783255 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:24:39.791328 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:24:39.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.791412 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:24:39.804670 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:24:39.804764 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:24:39.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.813959 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:24:39.814172 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:24:39.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.822050 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:24:39.822109 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:24:39.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.827331 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:24:39.827377 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:24:39.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:39.827479 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:24:39.827517 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:24:39.840810 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:24:39.846254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:24:39.854854 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:24:39.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.854945 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:24:39.855412 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:24:39.856348 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:24:39.856390 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:24:39.856798 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:24:39.856846 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:24:39.857556 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:24:39.857590 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:24:39.858902 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:24:39.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.859626 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:24:39.859718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:24:39.861466 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:24:39.861558 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:24:39.861788 systemd[1]: Stopped target network.target - Network. Jun 25 16:24:39.862153 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:24:39.862188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:24:39.862712 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:24:39.863213 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:24:39.909229 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:24:39.909334 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jun 25 16:24:39.909864 systemd-networkd[839]: eth0: DHCPv6 lease lost Jun 25 16:24:39.920815 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:24:39.920933 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:24:39.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.974931 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:24:39.976000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:24:39.976000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:24:39.974977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:24:39.987981 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:24:39.991646 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:24:39.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.991710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:24:40.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:39.995903 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:24:39.995949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:24:39.999174 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:24:39.999215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:24:40.005101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:24:40.005143 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:24:40.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.027845 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:24:40.032625 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:24:40.032720 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:24:40.046990 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:24:40.047151 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:24:40.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.053923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:24:40.053967 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:24:40.066089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jun 25 16:24:40.066149 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:24:40.079487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:24:40.079551 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:24:40.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.087909 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:24:40.087959 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:24:40.100262 kernel: hv_netvsc 000d3ab1-4dde-000d-3ab1-4dde000d3ab1 eth0: Data path switched from VF: enP9283s1 Jun 25 16:24:40.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.100276 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:24:40.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.100331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:24:40.116036 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:24:40.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.118905 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:24:40.118971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:24:40.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.122270 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:24:40.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.122319 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:24:40.128611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:24:40.128665 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. 
Jun 25 16:24:40.135444 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:24:40.135963 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:24:40.136054 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:24:40.141082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:24:40.141167 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:24:40.146614 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:24:40.182062 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:24:40.192200 systemd[1]: Switching root. Jun 25 16:24:40.215940 systemd-journald[178]: Journal stopped Jun 25 16:24:44.955149 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jun 25 16:24:44.955176 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:24:44.955188 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:24:44.955196 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:24:44.955204 kernel: SELinux: policy capability open_perms=1 Jun 25 16:24:44.955212 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:24:44.955223 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:24:44.955234 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:24:44.955242 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:24:44.955250 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:24:44.955258 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:24:44.955267 systemd[1]: Successfully loaded SELinux policy in 151.597ms. Jun 25 16:24:44.955277 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.466ms. Jun 25 16:24:44.955288 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:24:44.955300 systemd[1]: Detected virtualization microsoft. Jun 25 16:24:44.955309 systemd[1]: Detected architecture x86-64. Jun 25 16:24:44.955318 systemd[1]: Detected first boot. Jun 25 16:24:44.955328 systemd[1]: Hostname set to <ci-3815.2.4-a-a46e2cd05c>. Jun 25 16:24:44.955337 systemd[1]: Initializing machine ID from random generator. Jun 25 16:24:44.955348 kernel: kauditd_printk_skb: 45 callbacks suppressed Jun 25 16:24:44.955357 kernel: audit: type=1334 audit(1719332681.527:86): prog-id=10 op=LOAD Jun 25 16:24:44.955365 kernel: audit: type=1334 audit(1719332681.527:87): prog-id=10 op=UNLOAD Jun 25 16:24:44.955374 kernel: audit: type=1334 audit(1719332681.527:88): prog-id=11 op=LOAD Jun 25 16:24:44.955383 kernel: audit: type=1334 audit(1719332681.527:89): prog-id=11 op=UNLOAD Jun 25 16:24:44.955391 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:24:44.955400 kernel: audit: type=1334 audit(1719332684.471:90): prog-id=12 op=LOAD Jun 25 16:24:44.955411 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:24:44.955421 kernel: audit: type=1334 audit(1719332684.471:91): prog-id=3 op=UNLOAD Jun 25 16:24:44.955430 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 16:24:44.955439 kernel: audit: type=1334 audit(1719332684.471:92): prog-id=13 op=LOAD Jun 25 16:24:44.955448 kernel: audit: type=1334 audit(1719332684.471:93): prog-id=14 op=LOAD Jun 25 16:24:44.955456 kernel: audit: type=1334 audit(1719332684.471:94): prog-id=4 op=UNLOAD Jun 25 16:24:44.955465 kernel: audit: type=1334 audit(1719332684.471:95): prog-id=5 op=UNLOAD Jun 25 16:24:44.955473 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:24:44.955483 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:24:44.955495 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:24:44.955505 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:24:44.955514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:24:44.955524 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:24:44.955533 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:24:44.955546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:24:44.955555 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:24:44.955567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:24:44.955577 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:24:44.955587 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:24:44.955596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:24:44.955606 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:24:44.955616 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:24:44.955626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:24:44.955636 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:24:44.955646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:24:44.955658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:24:44.955668 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:24:44.955678 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:24:44.955687 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:24:44.955697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:24:44.955707 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:24:44.955716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:24:44.955728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:24:44.955738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:24:44.955747 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:24:44.955757 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:24:44.955767 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:24:44.955779 systemd[1]: Mounting media.mount - External Media Directory... 
Jun 25 16:24:44.955789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:44.955799 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:24:44.955809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:24:44.955826 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:24:44.955837 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:24:44.955851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:44.955861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:24:44.955873 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:24:44.955884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:44.955893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:24:44.955904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:44.955913 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:24:44.955924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:44.955934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:24:44.955944 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:24:44.955956 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:24:44.955966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:24:44.955976 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:24:44.955986 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:24:44.955996 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:24:44.956006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:24:44.956016 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:24:44.956026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:24:44.956038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:24:44.956048 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:24:44.956058 systemd[1]: Stopped verity-setup.service. Jun 25 16:24:44.956069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:44.956079 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:24:44.956092 systemd-journald[1185]: Journal started Jun 25 16:24:44.956130 systemd-journald[1185]: Runtime Journal (/run/log/journal/7e2b7a8ae2894068950f73d9a1bd2b42) is 8.0M, max 158.8M, 150.8M free. 
Jun 25 16:24:41.111000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:24:41.527000 audit: BPF prog-id=10 op=LOAD Jun 25 16:24:41.527000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:24:41.527000 audit: BPF prog-id=11 op=LOAD Jun 25 16:24:41.527000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:24:44.471000 audit: BPF prog-id=12 op=LOAD Jun 25 16:24:44.471000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:24:44.471000 audit: BPF prog-id=13 op=LOAD Jun 25 16:24:44.471000 audit: BPF prog-id=14 op=LOAD Jun 25 16:24:44.471000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:24:44.471000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:24:44.471000 audit: BPF prog-id=15 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:24:44.472000 audit: BPF prog-id=16 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=17 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:24:44.472000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:24:44.472000 audit: BPF prog-id=18 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:24:44.472000 audit: BPF prog-id=19 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=20 op=LOAD Jun 25 16:24:44.472000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:24:44.472000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:24:44.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.484000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:24:44.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.861000 audit: BPF prog-id=21 op=LOAD Jun 25 16:24:44.861000 audit: BPF prog-id=22 op=LOAD Jun 25 16:24:44.861000 audit: BPF prog-id=23 op=LOAD Jun 25 16:24:44.861000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:24:44.861000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:24:44.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:44.462122 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:24:44.462134 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 16:24:44.473915 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:24:44.948000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:24:44.948000 audit[1185]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe954dafa0 a2=4000 a3=7ffe954db03c items=0 ppid=1 pid=1185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:44.948000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:24:44.984859 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:24:44.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:44.989292 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:24:44.992798 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:24:44.995810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:24:45.001336 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:24:45.005254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:24:45.009456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:24:45.013661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:45.013940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:45.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.021839 kernel: fuse: init (API version 7.37) Jun 25 16:24:45.026391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:45.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:45.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.026745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:24:45.030476 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:24:45.030649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:24:45.034151 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:24:45.037678 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:24:45.042052 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:24:45.049944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:24:45.053386 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:24:45.068154 kernel: loop: module loaded Jun 25 16:24:45.070077 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:24:45.075336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:24:45.078335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:45.080584 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:24:45.085589 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:24:45.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.086748 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:24:45.090779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:45.090981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 25 16:24:45.094607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:24:45.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.111087 systemd-journald[1185]: Time spent on flushing to /var/log/journal/7e2b7a8ae2894068950f73d9a1bd2b42 is 32.980ms for 1086 entries. Jun 25 16:24:45.111087 systemd-journald[1185]: System Journal (/var/log/journal/7e2b7a8ae2894068950f73d9a1bd2b42) is 8.0M, max 2.6G, 2.6G free. Jun 25 16:24:45.190803 systemd-journald[1185]: Received client request to flush runtime journal. Jun 25 16:24:45.190882 kernel: ACPI: bus type drm_connector registered Jun 25 16:24:45.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.101482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:24:45.105180 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:24:45.120014 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:24:45.123191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:45.192216 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:24:45.125237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:24:45.132961 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:24:45.138190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:24:45.143319 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:24:45.148511 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:24:45.160744 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:24:45.160915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:24:45.191278 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:24:45.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.195240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jun 25 16:24:45.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.203085 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:24:45.211328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:24:45.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.402251 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:24:45.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:45.410041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:24:45.479569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:24:45.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.604844 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:24:46.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.609984 kernel: kauditd_printk_skb: 53 callbacks suppressed Jun 25 16:24:46.610012 kernel: audit: type=1130 audit(1719332686.607:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.607000 audit: BPF prog-id=24 op=LOAD Jun 25 16:24:46.623392 kernel: audit: type=1334 audit(1719332686.607:148): prog-id=24 op=LOAD Jun 25 16:24:46.607000 audit: BPF prog-id=25 op=LOAD Jun 25 16:24:46.626743 kernel: audit: type=1334 audit(1719332686.607:149): prog-id=25 op=LOAD Jun 25 16:24:46.607000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:24:46.629920 kernel: audit: type=1334 audit(1719332686.607:150): prog-id=7 op=UNLOAD Jun 25 16:24:46.607000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:24:46.632891 kernel: audit: type=1334 audit(1719332686.607:151): prog-id=8 op=UNLOAD Jun 25 16:24:46.635148 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:24:46.663451 systemd-udevd[1233]: Using default interface naming scheme 'v252'. Jun 25 16:24:46.774003 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:24:46.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:46.787850 kernel: audit: type=1130 audit(1719332686.776:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.789053 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:24:46.777000 audit: BPF prog-id=26 op=LOAD Jun 25 16:24:46.795952 kernel: audit: type=1334 audit(1719332686.777:153): prog-id=26 op=LOAD Jun 25 16:24:46.824170 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:24:46.864844 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1238) Jun 25 16:24:46.902000 audit: BPF prog-id=27 op=LOAD Jun 25 16:24:46.907879 kernel: audit: type=1334 audit(1719332686.902:154): prog-id=27 op=LOAD Jun 25 16:24:46.907966 kernel: audit: type=1334 audit(1719332686.902:155): prog-id=28 op=LOAD Jun 25 16:24:46.902000 audit: BPF prog-id=28 op=LOAD Jun 25 16:24:46.918468 kernel: audit: type=1334 audit(1719332686.902:156): prog-id=29 op=LOAD Jun 25 16:24:46.902000 audit: BPF prog-id=29 op=LOAD Jun 25 16:24:46.912037 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:24:46.931072 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 16:24:46.931275 kernel: hv_vmbus: registering driver hv_utils Jun 25 16:24:46.938509 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 16:24:46.938579 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 16:24:46.938600 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 16:24:46.938617 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:24:47.853619 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 16:24:47.870321 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 16:24:47.870410 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 16:24:47.875607 kernel: Console: switching to colour dummy device 80x25 Jun 25 16:24:47.882330 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:24:47.888623 kernel: hv_vmbus: registering driver hv_balloon Jun 25 16:24:47.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.901877 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:24:47.922735 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 16:24:48.041575 systemd-networkd[1234]: lo: Link UP Jun 25 16:24:48.042424 systemd-networkd[1234]: lo: Gained carrier Jun 25 16:24:48.043197 systemd-networkd[1234]: Enumeration completed Jun 25 16:24:48.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:48.043779 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:24:48.052169 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:48.052260 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 16:24:48.052846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:24:48.073617 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1237) Jun 25 16:24:48.120676 kernel: mlx5_core 2443:00:02.0 enP9283s1: Link up Jun 25 16:24:48.138620 kernel: hv_netvsc 000d3ab1-4dde-000d-3ab1-4dde000d3ab1 eth0: Data path switched to VF: enP9283s1 Jun 25 16:24:48.140697 systemd-networkd[1234]: enP9283s1: Link UP Jun 25 16:24:48.141452 systemd-networkd[1234]: eth0: Link UP Jun 25 16:24:48.141550 systemd-networkd[1234]: eth0: Gained carrier Jun 25 16:24:48.141678 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:48.147093 systemd-networkd[1234]: enP9283s1: Gained carrier Jun 25 16:24:48.169755 systemd-networkd[1234]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:24:48.172705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 16:24:48.325626 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Jun 25 16:24:48.351068 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:24:48.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:48.361866 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:24:48.425352 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:24:48.452690 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:24:48.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:48.457984 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:24:48.467867 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:24:48.472480 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:24:48.498724 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:24:48.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:48.502759 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:24:48.506546 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:24:48.506581 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:24:48.509791 systemd[1]: Reached target machines.target - Containers. Jun 25 16:24:48.518822 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:24:48.533018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:24:48.533096 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:48.534497 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:24:48.539598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:24:48.544170 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:24:48.549650 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:24:48.555221 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1317 (bootctl) Jun 25 16:24:48.561835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:24:48.587610 kernel: loop0: detected capacity change from 0 to 55560 Jun 25 16:24:49.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:49.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:49.003466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:24:49.734843 systemd-networkd[1234]: eth0: Gained IPv6LL Jun 25 16:24:49.740562 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:24:50.853923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:24:50.854668 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:24:50.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:50.978054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:24:51.030616 kernel: loop1: detected capacity change from 0 to 80584 Jun 25 16:24:51.323658 systemd-fsck[1326]: fsck.fat 4.2 (2021-01-31) Jun 25 16:24:51.323658 systemd-fsck[1326]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:24:51.326214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:24:51.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:51.332823 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:24:51.352708 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:24:51.367762 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. 
Jun 25 16:24:51.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:51.374612 kernel: loop2: detected capacity change from 0 to 209816 Jun 25 16:24:51.402630 kernel: loop3: detected capacity change from 0 to 139360 Jun 25 16:24:51.750635 kernel: loop4: detected capacity change from 0 to 55560 Jun 25 16:24:51.756616 kernel: loop5: detected capacity change from 0 to 80584 Jun 25 16:24:51.765616 kernel: loop6: detected capacity change from 0 to 209816 Jun 25 16:24:51.772623 kernel: loop7: detected capacity change from 0 to 139360 Jun 25 16:24:51.779669 (sd-sysext)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 16:24:51.780180 (sd-sysext)[1337]: Merged extensions into '/usr'. Jun 25 16:24:51.781941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:24:51.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:51.790860 systemd[1]: Starting ensure-sysext.service... Jun 25 16:24:51.795246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:24:51.811993 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:24:51.813761 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:24:51.814234 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:24:51.815457 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:24:51.822711 systemd[1]: Reloading. Jun 25 16:24:52.034063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:24:52.107000 audit: BPF prog-id=30 op=LOAD Jun 25 16:24:52.107000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:24:52.108000 audit: BPF prog-id=31 op=LOAD Jun 25 16:24:52.108000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:24:52.108000 audit: BPF prog-id=32 op=LOAD Jun 25 16:24:52.108000 audit: BPF prog-id=33 op=LOAD Jun 25 16:24:52.108000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:24:52.108000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:24:52.108000 audit: BPF prog-id=34 op=LOAD Jun 25 16:24:52.108000 audit: BPF prog-id=35 op=LOAD Jun 25 16:24:52.108000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:24:52.108000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:24:52.110000 audit: BPF prog-id=36 op=LOAD Jun 25 16:24:52.110000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:24:52.110000 audit: BPF prog-id=37 op=LOAD Jun 25 16:24:52.110000 audit: BPF prog-id=38 op=LOAD Jun 25 16:24:52.110000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:24:52.110000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:24:52.114308 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jun 25 16:24:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.129457 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:24:52.142796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:24:52.148011 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:24:52.151000 audit: BPF prog-id=39 op=LOAD Jun 25 16:24:52.153384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:24:52.156000 audit: BPF prog-id=40 op=LOAD Jun 25 16:24:52.163792 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:24:52.168655 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:24:52.180453 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.180823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:52.182000 audit[1429]: SYSTEM_BOOT pid=1429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.183126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:52.187998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:52.193135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:52.196263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:24:52.196937 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:52.197151 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.198494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:52.198739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:52.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.203005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:52.203213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:24:52.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:52.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.209885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:52.213006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.213784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:52.218164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:52.225354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:52.228230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:24:52.228510 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:52.228827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.231439 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:52.233349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:24:52.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.239148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:52.239323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:52.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.243231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:24:52.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.248209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:52.254191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:52.254362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 25 16:24:52.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.258972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.259353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:52.265062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:52.279015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:24:52.284323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:52.287700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:24:52.287917 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:52.288102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:52.288294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:52.289515 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:24:52.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.298768 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:24:52.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.302560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:52.302746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:52.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.307981 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:24:52.308319 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jun 25 16:24:52.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.311773 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:52.311920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:24:52.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.315643 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:24:52.318481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:52.319688 systemd[1]: Finished ensure-sysext.service. Jun 25 16:24:52.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.383068 systemd-resolved[1422]: Positive Trust Anchors: Jun 25 16:24:52.383093 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:24:52.383131 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:24:52.414459 systemd-resolved[1422]: Using system hostname 'ci-3815.2.4-a-a46e2cd05c'. Jun 25 16:24:52.416723 augenrules[1447]: No rules Jun 25 16:24:52.415000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:24:52.415000 audit[1447]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc13b6bda0 a2=420 a3=0 items=0 ppid=1418 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:52.415000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:24:52.417478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:24:52.420999 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:24:52.424227 systemd[1]: Reached target network.target - Network. Jun 25 16:24:52.426615 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 25 16:24:52.435925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:24:52.450152 systemd-timesyncd[1425]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jun 25 16:24:52.450296 systemd-timesyncd[1425]: Initial clock synchronization to Tue 2024-06-25 16:24:52.456628 UTC. Jun 25 16:24:52.678163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:24:52.685699 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:24:55.001815 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:24:55.015162 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:24:55.022871 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:24:55.036245 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:24:55.039492 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:24:55.042436 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:24:55.045688 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:24:55.048831 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:24:55.051866 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:24:55.055009 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:24:55.057887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:24:55.057932 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:24:55.060464 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:24:55.063561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:24:55.068138 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:24:55.079446 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:24:55.083543 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:55.084085 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:24:55.086955 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:24:55.089517 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:24:55.092295 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:24:55.092329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:24:55.099773 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:24:55.105104 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:24:55.110251 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:24:55.115142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jun 25 16:24:55.120314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:24:55.123427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:24:55.126294 jq[1460]: false Jun 25 16:24:55.145793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:55.150914 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:24:55.155372 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:24:55.160337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:24:55.165449 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:24:55.170354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:24:55.181821 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:24:55.184703 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:55.184804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:24:55.185440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:24:55.186702 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:24:55.190470 extend-filesystems[1461]: Found loop4 Jun 25 16:24:55.191209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:24:55.196100 extend-filesystems[1461]: Found loop5 Jun 25 16:24:55.199408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:24:55.199743 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:24:55.205044 dbus-daemon[1457]: [system] SELinux support is enabled Jun 25 16:24:55.205440 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:24:55.211450 extend-filesystems[1461]: Found loop6 Jun 25 16:24:55.213647 extend-filesystems[1461]: Found loop7 Jun 25 16:24:55.215464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:24:55.219052 jq[1473]: true Jun 25 16:24:55.215508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 25 16:24:55.219544 extend-filesystems[1461]: Found sda Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda1 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda2 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda3 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found usr Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda4 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda6 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda7 Jun 25 16:24:55.224954 extend-filesystems[1461]: Found sda9 Jun 25 16:24:55.219993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:24:55.251507 extend-filesystems[1461]: Checking size of /dev/sda9 Jun 25 16:24:55.220022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:24:55.225194 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:24:55.225356 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:24:55.278362 jq[1482]: true Jun 25 16:24:55.312891 update_engine[1472]: I0625 16:24:55.312808 1472 main.cc:92] Flatcar Update Engine starting Jun 25 16:24:55.316071 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:24:55.316294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:24:55.317156 extend-filesystems[1461]: Old size kept for /dev/sda9 Jun 25 16:24:55.339048 extend-filesystems[1461]: Found sr0 Jun 25 16:24:55.322431 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:24:55.346429 tar[1476]: linux-amd64/helm Jun 25 16:24:55.363983 update_engine[1472]: I0625 16:24:55.339841 1472 update_check_scheduler.cc:74] Next update check in 8m34s Jun 25 16:24:55.322679 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:24:55.328247 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:24:55.350869 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:24:55.377002 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:24:55.490469 coreos-metadata[1456]: Jun 25 16:24:55.490 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 16:24:55.634744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1521) Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.506 INFO Fetch successful Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.506 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.510 INFO Fetch successful Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.510 INFO Fetching http://168.63.129.16/machine/f102efe4-0a04-4705-b04e-f0c921a5f915/5981c358%2Db041%2D4900%2Da3b0%2D208d6ee6c1be.%5Fci%2D3815.2.4%2Da%2Da46e2cd05c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.512 INFO Fetch successful Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.512 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 16:24:55.634869 coreos-metadata[1456]: Jun 25 16:24:55.527 INFO Fetch successful Jun 25 16:24:55.559770 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jun 25 16:24:55.563504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:24:55.638634 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:24:55.639319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:24:55.643476 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:24:55.698308 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:24:55.702630 systemd-logind[1471]: New seat seat0. Jun 25 16:24:55.719569 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:24:56.080790 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:24:56.270544 containerd[1477]: time="2024-06-25T16:24:56.270443035Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:24:56.316909 containerd[1477]: time="2024-06-25T16:24:56.316848433Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:24:56.317137 containerd[1477]: time="2024-06-25T16:24:56.317119024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.319009 containerd[1477]: time="2024-06-25T16:24:56.318963548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:56.319155 containerd[1477]: time="2024-06-25T16:24:56.319139008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.319547 containerd[1477]: time="2024-06-25T16:24:56.319520337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:56.319667 containerd[1477]: time="2024-06-25T16:24:56.319650781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:24:56.319849 containerd[1477]: time="2024-06-25T16:24:56.319833142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.319991 containerd[1477]: time="2024-06-25T16:24:56.319966388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:56.320063 containerd[1477]: time="2024-06-25T16:24:56.320050516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.320200 containerd[1477]: time="2024-06-25T16:24:56.320185562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.320524 containerd[1477]: time="2024-06-25T16:24:56.320499168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:24:56.320644 containerd[1477]: time="2024-06-25T16:24:56.320625010Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:24:56.320733 containerd[1477]: time="2024-06-25T16:24:56.320720143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:56.320971 containerd[1477]: time="2024-06-25T16:24:56.320951721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:56.321050 containerd[1477]: time="2024-06-25T16:24:56.321037350Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:24:56.321193 containerd[1477]: time="2024-06-25T16:24:56.321169995Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:24:56.321268 containerd[1477]: time="2024-06-25T16:24:56.321255824Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451275906Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451328624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451349831Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451391445Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451410451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451428357Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:24:56.451664 containerd[1477]: time="2024-06-25T16:24:56.451444863Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452084679Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452150502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452171409Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452190115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452221026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452245834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452263940Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452280246Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452309556Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452328162Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452346068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452372177Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:24:56.452889 containerd[1477]: time="2024-06-25T16:24:56.452523628Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:24:56.453612 containerd[1477]: time="2024-06-25T16:24:56.453423832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:24:56.453612 containerd[1477]: time="2024-06-25T16:24:56.453474350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.453612 containerd[1477]: time="2024-06-25T16:24:56.453493556Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:24:56.453612 containerd[1477]: time="2024-06-25T16:24:56.453525867Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453596891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453801960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453818766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453833471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453849677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453872384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453888690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453904195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.453921101Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454062248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454083756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454102062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454118668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454135973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.455619 containerd[1477]: time="2024-06-25T16:24:56.454154380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.456159 containerd[1477]: time="2024-06-25T16:24:56.454171886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.456159 containerd[1477]: time="2024-06-25T16:24:56.454187391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 16:24:56.456233 containerd[1477]: time="2024-06-25T16:24:56.454519603Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false 
EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:24:56.456233 containerd[1477]: time="2024-06-25T16:24:56.454609534Z" level=info msg="Connect containerd service" Jun 25 16:24:56.456233 containerd[1477]: time="2024-06-25T16:24:56.454649147Z" level=info msg="using legacy CRI server" Jun 25 16:24:56.456233 containerd[1477]: time="2024-06-25T16:24:56.454658350Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:24:56.456233 containerd[1477]: time="2024-06-25T16:24:56.454697663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:24:56.459009 containerd[1477]: time="2024-06-25T16:24:56.458401716Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:24:56.459738 containerd[1477]: time="2024-06-25T16:24:56.459686151Z" level=info msg="Start subscribing containerd event" Jun 25 16:24:56.459824 containerd[1477]: time="2024-06-25T16:24:56.459755374Z" level=info msg="Start recovering state" Jun 25 16:24:56.459870 containerd[1477]: time="2024-06-25T16:24:56.459844104Z" level=info msg="Start event monitor" Jun 25 16:24:56.459913 containerd[1477]: time="2024-06-25T16:24:56.459866512Z" level=info msg="Start snapshots syncer" Jun 25 16:24:56.459913 containerd[1477]: time="2024-06-25T16:24:56.459886919Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:24:56.459913 containerd[1477]: time="2024-06-25T16:24:56.459899523Z" level=info msg="Start streaming server" Jun 25 16:24:56.460194 containerd[1477]: time="2024-06-25T16:24:56.460173416Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:24:56.460283 containerd[1477]: time="2024-06-25T16:24:56.460266047Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:24:56.460343 containerd[1477]: time="2024-06-25T16:24:56.460331069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:24:56.460410 containerd[1477]: time="2024-06-25T16:24:56.460397091Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:24:56.480434 containerd[1477]: time="2024-06-25T16:24:56.480398657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:24:56.480659 containerd[1477]: time="2024-06-25T16:24:56.480643040Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 25 16:24:56.480890 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:24:56.481320 containerd[1477]: time="2024-06-25T16:24:56.481303563Z" level=info msg="containerd successfully booted in 0.212076s" Jun 25 16:24:56.686177 tar[1476]: linux-amd64/LICENSE Jun 25 16:24:56.687858 tar[1476]: linux-amd64/README.md Jun 25 16:24:56.702055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:24:56.855875 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:24:56.884004 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:24:56.894061 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:24:56.898796 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 16:24:56.907923 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:24:56.908144 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:24:56.922645 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:24:56.928720 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 16:24:56.946508 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:24:56.953608 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:24:56.962123 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:24:56.965879 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:24:56.977834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:56.981881 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:24:56.987180 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:24:57.000397 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:24:57.000616 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:24:57.003920 systemd[1]: Startup finished in 767ms (firmware) + 21.556s (loader) + 1.090s (kernel) + 9.252s (initrd) + 15.158s (userspace) = 47.825s. Jun 25 16:24:57.341308 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:24:57.343670 login[1586]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:24:57.352942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:24:57.358071 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:24:57.362452 systemd-logind[1471]: New session 2 of user core. Jun 25 16:24:57.366990 systemd-logind[1471]: New session 1 of user core. Jun 25 16:24:57.374826 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:24:57.381963 systemd[1]: Starting user@500.service - User Manager for UID 500... 
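Annotation: the "Startup finished" line above sums the per-stage boot timings. As a quick sanity check (a standalone sketch using only the five durations printed in the log; the tiny discrepancy against the reported 47.825s total is display rounding of each stage):

    # Sum the stage timings printed by systemd's "Startup finished" message.
    stages = {"firmware": 0.767, "loader": 21.556, "kernel": 1.090,
              "initrd": 9.252, "userspace": 15.158}
    total = sum(stages.values())
    print(f"sum of displayed stages = {total:.3f}s (log reports 47.825s)")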
Jun 25 16:24:57.386052 (systemd)[1594]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:57.583344 kubelet[1587]: E0625 16:24:57.583272 1587 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:24:57.585942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:24:57.586108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:24:57.624281 systemd[1594]: Queued start job for default target default.target. Jun 25 16:24:57.630029 systemd[1594]: Reached target paths.target - Paths. Jun 25 16:24:57.630055 systemd[1594]: Reached target sockets.target - Sockets. Jun 25 16:24:57.630071 systemd[1594]: Reached target timers.target - Timers. Jun 25 16:24:57.630084 systemd[1594]: Reached target basic.target - Basic System. Jun 25 16:24:57.630138 systemd[1594]: Reached target default.target - Main User Target. Jun 25 16:24:57.630175 systemd[1594]: Startup finished in 235ms. Jun 25 16:24:57.630215 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:24:57.631967 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:24:57.632810 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:24:58.164216 waagent[1582]: 2024-06-25T16:24:58.164110Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.164651Z INFO Daemon Daemon OS: flatcar 3815.2.4 Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.165618Z INFO Daemon Daemon Python: 3.11.6 Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.166328Z INFO Daemon Daemon Run daemon Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.167192Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.4' Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.168015Z INFO Daemon Daemon Using waagent for provisioning Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.169240Z INFO Daemon Daemon Activate resource disk Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.170029Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.174157Z INFO Daemon Daemon Found device: None Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.174804Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.175251Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.176604Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 16:24:58.201347 waagent[1582]: 2024-06-25T16:24:58.177696Z INFO Daemon Daemon Running default provisioning handler Jun 25 16:24:58.205374 waagent[1582]: 2024-06-25T16:24:58.205292Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
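Annotation: the kubelet exit recorded above is the first of several identical failures in this log; the unit starts before /var/lib/kubelet/config.yaml exists (on kubeadm-provisioned nodes that file is normally written by kubeadm init or kubeadm join, which has not run at this point). A minimal, illustrative way to pull the missing path out of such a journal line with standard-library Python (the regex and variable names are mine, not part of any tool shown in this log):

    import re

    # Shortened copy of the kubelet error line logged above.
    line = ('kubelet[1587]: E0625 16:24:57.583272 1587 run.go:74] "command failed" '
            'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')

    m = re.search(r'failed to load kubelet config file, path: (\S+?),', line)
    if m:
        print("kubelet is waiting for:", m.group(1))  # -> /var/lib/kubelet/config.yaml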
Jun 25 16:24:58.212326 waagent[1582]: 2024-06-25T16:24:58.212268Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 16:24:58.217653 waagent[1582]: 2024-06-25T16:24:58.217569Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 16:24:58.222331 waagent[1582]: 2024-06-25T16:24:58.217792Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 16:24:58.268710 waagent[1582]: 2024-06-25T16:24:58.268585Z INFO Daemon Daemon Successfully mounted dvd Jun 25 16:24:58.294876 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 16:24:58.298615 waagent[1582]: 2024-06-25T16:24:58.298524Z INFO Daemon Daemon Detect protocol endpoint Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.298908Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.299531Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.299952Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.300519Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.301292Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.311552Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.312382Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 16:24:58.314003 waagent[1582]: 2024-06-25T16:24:58.313104Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 16:24:58.543937 waagent[1582]: 2024-06-25T16:24:58.543786Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 16:24:58.547467 waagent[1582]: 2024-06-25T16:24:58.547340Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 16:24:58.551035 waagent[1582]: 2024-06-25T16:24:58.550983Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 16:24:58.556221 waagent[1582]: 2024-06-25T16:24:58.556178Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 16:24:58.572937 waagent[1582]: 2024-06-25T16:24:58.556781Z INFO Daemon Jun 25 16:24:58.572937 waagent[1582]: 2024-06-25T16:24:58.557909Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0ba48678-4054-44b1-a05f-540a58a039b9 eTag: 13619548817770075192 source: Fabric] Jun 25 16:24:58.572937 waagent[1582]: 2024-06-25T16:24:58.558608Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
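Annotation: the "Test for route to 168.63.129.16" step above is the agent confirming it can reach the fixed Azure WireServer address before protocol negotiation. A rough stand-in for that reachability check using only the standard library (the plain TCP probe on port 80 is my simplification, not the agent's actual code):

    import socket

    WIRESERVER = "168.63.129.16"  # fixed WireServer address, as logged above

    def wireserver_reachable(timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to the WireServer HTTP port succeeds."""
        try:
            with socket.create_connection((WIRESERVER, 80), timeout=timeout):
                return True
        except OSError:
            return False

    print("route to wireserver exists:", wireserver_reachable())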
Jun 25 16:24:58.572937 waagent[1582]: 2024-06-25T16:24:58.559288Z INFO Daemon Jun 25 16:24:58.572937 waagent[1582]: 2024-06-25T16:24:58.560190Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 16:24:58.575737 waagent[1582]: 2024-06-25T16:24:58.575697Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 16:24:58.665988 waagent[1582]: 2024-06-25T16:24:58.665900Z INFO Daemon Downloaded certificate {'thumbprint': 'B7EAED41759FFB466475D5CACCFCB16FB51918E8', 'hasPrivateKey': False} Jun 25 16:24:58.678003 waagent[1582]: 2024-06-25T16:24:58.666546Z INFO Daemon Downloaded certificate {'thumbprint': '871633B8438240DAB957FB931381D67CB9EC6A35', 'hasPrivateKey': True} Jun 25 16:24:58.678003 waagent[1582]: 2024-06-25T16:24:58.667285Z INFO Daemon Fetch goal state completed Jun 25 16:24:58.683975 waagent[1582]: 2024-06-25T16:24:58.683926Z INFO Daemon Daemon Starting provisioning Jun 25 16:24:58.691742 waagent[1582]: 2024-06-25T16:24:58.684288Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 16:24:58.691742 waagent[1582]: 2024-06-25T16:24:58.685328Z INFO Daemon Daemon Set hostname [ci-3815.2.4-a-a46e2cd05c] Jun 25 16:24:58.798915 waagent[1582]: 2024-06-25T16:24:58.798820Z INFO Daemon Daemon Publish hostname [ci-3815.2.4-a-a46e2cd05c] Jun 25 16:24:58.807657 waagent[1582]: 2024-06-25T16:24:58.799373Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 16:24:58.807657 waagent[1582]: 2024-06-25T16:24:58.800543Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 16:24:58.851067 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:58.851078 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:24:58.851130 systemd-networkd[1234]: eth0: DHCP lease lost Jun 25 16:24:58.852484 waagent[1582]: 2024-06-25T16:24:58.852391Z INFO Daemon Daemon Create user account if not exists Jun 25 16:24:58.867802 waagent[1582]: 2024-06-25T16:24:58.852819Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 16:24:58.867802 waagent[1582]: 2024-06-25T16:24:58.854437Z INFO Daemon Daemon Configure sudoer Jun 25 16:24:58.867802 waagent[1582]: 2024-06-25T16:24:58.855543Z INFO Daemon Daemon Configure sshd Jun 25 16:24:58.867802 waagent[1582]: 2024-06-25T16:24:58.856297Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 16:24:58.867802 waagent[1582]: 2024-06-25T16:24:58.856919Z INFO Daemon Daemon Deploy ssh public key. Jun 25 16:24:58.868751 systemd-networkd[1234]: eth0: DHCPv6 lease lost Jun 25 16:24:58.893670 systemd-networkd[1234]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:25:00.154289 waagent[1582]: 2024-06-25T16:25:00.154226Z INFO Daemon Daemon Provisioning complete Jun 25 16:25:00.167348 waagent[1582]: 2024-06-25T16:25:00.167274Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 16:25:00.175096 waagent[1582]: 2024-06-25T16:25:00.167673Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
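Annotation: the "Downloaded certificate" entries above report 40-hex-character thumbprints, consistent with the usual Azure/Windows convention of a SHA-1 digest over the DER-encoded certificate. A short sketch of computing such a thumbprint from a PEM file, assuming that format (the path is a placeholder, and this is not code taken from the agent):

    import hashlib
    import ssl

    def cert_thumbprint(pem_path: str) -> str:
        """SHA-1 over the DER encoding of a PEM certificate, upper-case hex."""
        pem = open(pem_path, "r").read()
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha1(der).hexdigest().upper()

    # Example with a placeholder path:
    # print(cert_thumbprint("/var/lib/waagent/example.crt"))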
Jun 25 16:25:00.175096 waagent[1582]: 2024-06-25T16:25:00.168155Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 16:25:00.296844 waagent[1641]: 2024-06-25T16:25:00.296740Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 16:25:00.297223 waagent[1641]: 2024-06-25T16:25:00.296912Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.4 Jun 25 16:25:00.297223 waagent[1641]: 2024-06-25T16:25:00.297004Z INFO ExtHandler ExtHandler Python: 3.11.6 Jun 25 16:25:00.460891 waagent[1641]: 2024-06-25T16:25:00.460733Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 16:25:00.461084 waagent[1641]: 2024-06-25T16:25:00.461033Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:25:00.461190 waagent[1641]: 2024-06-25T16:25:00.461145Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:25:00.468032 waagent[1641]: 2024-06-25T16:25:00.467961Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 16:25:01.346659 waagent[1641]: 2024-06-25T16:25:01.346564Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 16:25:01.347348 waagent[1641]: 2024-06-25T16:25:01.347291Z INFO ExtHandler Jun 25 16:25:01.347458 waagent[1641]: 2024-06-25T16:25:01.347398Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: afd0f337-abd6-458f-b9e6-93444161e221 eTag: 13619548817770075192 source: Fabric] Jun 25 16:25:01.347794 waagent[1641]: 2024-06-25T16:25:01.347748Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 25 16:25:01.497893 waagent[1641]: 2024-06-25T16:25:01.497735Z INFO ExtHandler Jun 25 16:25:01.498319 waagent[1641]: 2024-06-25T16:25:01.498250Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 16:25:01.505835 waagent[1641]: 2024-06-25T16:25:01.505787Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 16:25:02.171044 waagent[1641]: 2024-06-25T16:25:02.170945Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B7EAED41759FFB466475D5CACCFCB16FB51918E8', 'hasPrivateKey': False} Jun 25 16:25:02.171720 waagent[1641]: 2024-06-25T16:25:02.171658Z INFO ExtHandler Downloaded certificate {'thumbprint': '871633B8438240DAB957FB931381D67CB9EC6A35', 'hasPrivateKey': True} Jun 25 16:25:02.172331 waagent[1641]: 2024-06-25T16:25:02.172274Z INFO ExtHandler Fetch goal state completed Jun 25 16:25:02.189146 waagent[1641]: 2024-06-25T16:25:02.189066Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1641 Jun 25 16:25:02.189330 waagent[1641]: 2024-06-25T16:25:02.189278Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 16:25:02.191055 waagent[1641]: 2024-06-25T16:25:02.190997Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.4', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 16:25:02.191671 waagent[1641]: 2024-06-25T16:25:02.191621Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 16:25:02.223063 waagent[1641]: 2024-06-25T16:25:02.223017Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 16:25:02.223291 waagent[1641]: 2024-06-25T16:25:02.223242Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 16:25:02.229870 waagent[1641]: 2024-06-25T16:25:02.229832Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 16:25:02.237266 systemd[1]: Reloading. Jun 25 16:25:02.424736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:25:02.504253 waagent[1641]: 2024-06-25T16:25:02.504156Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 16:25:02.510531 systemd[1]: Reloading. Jun 25 16:25:02.692377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:25:02.770572 waagent[1641]: 2024-06-25T16:25:02.770463Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 16:25:02.770759 waagent[1641]: 2024-06-25T16:25:02.770703Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 16:25:02.981412 waagent[1641]: 2024-06-25T16:25:02.981266Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 16:25:02.982063 waagent[1641]: 2024-06-25T16:25:02.981991Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 16:25:02.982983 waagent[1641]: 2024-06-25T16:25:02.982918Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 16:25:02.983546 waagent[1641]: 2024-06-25T16:25:02.983479Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 16:25:02.983711 waagent[1641]: 2024-06-25T16:25:02.983655Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:25:02.983847 waagent[1641]: 2024-06-25T16:25:02.983795Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:25:02.984076 waagent[1641]: 2024-06-25T16:25:02.984023Z INFO EnvHandler ExtHandler Configure routes Jun 25 16:25:02.984202 waagent[1641]: 2024-06-25T16:25:02.984152Z INFO EnvHandler ExtHandler Gateway:None Jun 25 16:25:02.984319 waagent[1641]: 2024-06-25T16:25:02.984269Z INFO EnvHandler ExtHandler Routes:None Jun 25 16:25:02.985077 waagent[1641]: 2024-06-25T16:25:02.985022Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 16:25:02.985309 waagent[1641]: 2024-06-25T16:25:02.985258Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:25:02.985564 waagent[1641]: 2024-06-25T16:25:02.985517Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 16:25:02.986381 waagent[1641]: 2024-06-25T16:25:02.986322Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:25:02.986903 waagent[1641]: 2024-06-25T16:25:02.986834Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
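Annotation: the systemd note emitted during the two reloads above ("please update the unit file accordingly") refers to docker.socket still listing /var/run/docker.sock; systemd already substitutes /run/docker.sock at runtime, so the remaining fix is cosmetic. A throwaway check that mirrors the warning and prints the modern path for any legacy ListenStream= entries (the scan scope and output format are illustrative assumptions):

    import glob
    import re

    # Flag ListenStream= values under the legacy /var/run/ prefix, as systemd did above.
    for unit in glob.glob("/usr/lib/systemd/system/*.socket"):
        for line in open(unit, encoding="utf-8", errors="replace"):
            m = re.match(r"\s*ListenStream=(/var/run/\S+)", line)
            if m:
                legacy = m.group(1)
                print(f"{unit}: {legacy} -> {legacy.replace('/var/run/', '/run/', 1)}")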
Jun 25 16:25:02.987319 waagent[1641]: 2024-06-25T16:25:02.987253Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 16:25:02.987319 waagent[1641]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 16:25:02.987319 waagent[1641]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 16:25:02.987319 waagent[1641]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 16:25:02.987319 waagent[1641]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:25:02.987319 waagent[1641]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:25:02.987319 waagent[1641]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:25:02.988882 waagent[1641]: 2024-06-25T16:25:02.988818Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 16:25:02.989458 waagent[1641]: 2024-06-25T16:25:02.988620Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 16:25:02.989645 waagent[1641]: 2024-06-25T16:25:02.989558Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 16:25:03.039931 waagent[1641]: 2024-06-25T16:25:03.039848Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 16:25:03.039931 waagent[1641]: Executing ['ip', '-a', '-o', 'link']: Jun 25 16:25:03.039931 waagent[1641]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 16:25:03.039931 waagent[1641]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b1:4d:de brd ff:ff:ff:ff:ff:ff Jun 25 16:25:03.039931 waagent[1641]: 3: enP9283s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b1:4d:de brd ff:ff:ff:ff:ff:ff\ altname enP9283p0s2 Jun 25 16:25:03.039931 waagent[1641]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 16:25:03.039931 waagent[1641]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 16:25:03.039931 waagent[1641]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 16:25:03.039931 waagent[1641]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 16:25:03.039931 waagent[1641]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jun 25 16:25:03.039931 waagent[1641]: 2: eth0 inet6 fe80::20d:3aff:feb1:4dde/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 16:25:03.065260 waagent[1641]: 2024-06-25T16:25:03.065194Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 16:25:03.065260 waagent[1641]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.065260 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.065260 waagent[1641]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.065260 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.065260 waagent[1641]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.065260 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.065260 waagent[1641]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 16:25:03.065260 waagent[1641]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 16:25:03.065260 waagent[1641]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 16:25:03.068683 waagent[1641]: 2024-06-25T16:25:03.068558Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 16:25:03.068683 waagent[1641]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.068683 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.068683 waagent[1641]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.068683 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.068683 waagent[1641]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:25:03.068683 waagent[1641]: pkts bytes target prot opt in out source destination Jun 25 16:25:03.068683 waagent[1641]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 16:25:03.068683 waagent[1641]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 16:25:03.068683 waagent[1641]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 16:25:03.069120 waagent[1641]: 2024-06-25T16:25:03.068957Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 16:25:03.127612 waagent[1641]: 2024-06-25T16:25:03.127513Z INFO ExtHandler ExtHandler Jun 25 16:25:03.127787 waagent[1641]: 2024-06-25T16:25:03.127721Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a5066afd-9157-41d4-8631-6d76851dacca correlation 10cbea71-2f20-4940-a7e5-6882286d4c94 created: 2024-06-25T16:23:59.087914Z] Jun 25 16:25:03.128828 waagent[1641]: 2024-06-25T16:25:03.128774Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 16:25:03.129693 waagent[1641]: 2024-06-25T16:25:03.129644Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jun 25 16:25:04.332507 waagent[1641]: 2024-06-25T16:25:04.332437Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 586C5C0B-1904-493D-8D78-64CFF052F1DB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 16:25:07.836939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:25:07.837269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:07.846067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:25:07.939949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
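Annotation: the MonitorHandler's dump a few entries above prints /proc/net/route verbatim, where Destination, Gateway and Mask are little-endian hexadecimal IPv4 values. A small standalone decoder, with sample values copied from the table logged above:

    import socket
    import struct

    def hex_le_ipv4(value: str) -> str:
        """Convert a /proc/net/route hex field (little-endian) to dotted-quad form."""
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    print(hex_le_ipv4("0108C80A"))  # 10.200.8.1    (default gateway)
    print(hex_le_ipv4("0008C80A"))  # 10.200.8.0    (local subnet)
    print(hex_le_ipv4("00FFFFFF"))  # 255.255.255.0 (the /24 netmask)
    print(hex_le_ipv4("10813FA8"))  # 168.63.129.16 (WireServer host route)
    print(hex_le_ipv4("FEA9FEA9"))  # 169.254.169.254 (instance metadata service)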
Jun 25 16:25:08.483581 kubelet[1844]: E0625 16:25:08.483525 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:25:08.486759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:25:08.486926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:25:18.738177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:25:18.738518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:18.745182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:25:18.834942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:19.362410 kubelet[1855]: E0625 16:25:19.362350 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:25:19.364160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:25:19.364330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:25:29.382529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:25:29.382856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:29.390070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:25:29.481153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:30.032233 kubelet[1865]: E0625 16:25:30.032175 1865 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:25:30.033864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:25:30.034032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:25:36.056806 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 25 16:25:36.106804 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:25:36.108720 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:42166.service - OpenSSH per-connection server daemon (10.200.16.10:42166). Jun 25 16:25:36.784792 sshd[1873]: Accepted publickey for core from 10.200.16.10 port 42166 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:36.786395 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:36.791494 systemd-logind[1471]: New session 3 of user core. Jun 25 16:25:36.797797 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:25:37.348800 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:42174.service - OpenSSH per-connection server daemon (10.200.16.10:42174). 
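Annotation: each "Accepted publickey ... SHA256:t81p..." line above identifies the key by OpenSSH's fingerprint format, i.e. the unpadded base64 of a SHA-256 digest over the raw public-key blob. A sketch that reproduces that fingerprint from an authorized_keys-style entry (the sample key material is a placeholder, not the key used on this host):

    import base64
    import hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        """SHA256:<base64, no padding> fingerprint of an OpenSSH public key entry."""
        key_b64 = authorized_keys_line.split()[1]      # "ssh-rsa AAAA... comment"
        blob = base64.b64decode(key_b64)
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Example with placeholder key material:
    # print(openssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@example"))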
Jun 25 16:25:37.983006 sshd[1878]: Accepted publickey for core from 10.200.16.10 port 42174 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:37.984490 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:37.988943 systemd-logind[1471]: New session 4 of user core. Jun 25 16:25:37.995788 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:25:38.437915 sshd[1878]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:38.441221 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:42174.service: Deactivated successfully. Jun 25 16:25:38.442629 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:25:38.442647 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:25:38.443949 systemd-logind[1471]: Removed session 4. Jun 25 16:25:38.555863 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:42186.service - OpenSSH per-connection server daemon (10.200.16.10:42186). Jun 25 16:25:39.197553 sshd[1884]: Accepted publickey for core from 10.200.16.10 port 42186 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:39.199002 sshd[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:39.203466 systemd-logind[1471]: New session 5 of user core. Jun 25 16:25:39.206805 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:25:39.651864 sshd[1884]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:39.655181 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:42186.service: Deactivated successfully. Jun 25 16:25:39.656239 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:25:39.657075 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:25:39.657925 systemd-logind[1471]: Removed session 5. Jun 25 16:25:39.767035 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:42190.service - OpenSSH per-connection server daemon (10.200.16.10:42190). Jun 25 16:25:40.132394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 16:25:40.132667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:40.140182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:25:40.231544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:40.408295 sshd[1890]: Accepted publickey for core from 10.200.16.10 port 42190 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:40.409975 sshd[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:40.414688 systemd-logind[1471]: New session 6 of user core. Jun 25 16:25:40.419789 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:25:40.502836 update_engine[1472]: I0625 16:25:40.502780 1472 update_attempter.cc:509] Updating boot flags... Jun 25 16:25:40.770027 kubelet[1896]: E0625 16:25:40.769494 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:25:40.771363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:25:40.771548 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
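Annotation: by this point the same kubelet failure has repeated with restart counters 1 through 4, spaced roughly eleven seconds apart; that spacing is what Restart=on-failure with a RestartSec of about 10 seconds plus the unit's own start-up time would produce, although the exact unit settings are an assumption since they are not shown in this log. The intervals can be read straight off the logged failure timestamps:

    from datetime import datetime

    # Timestamps of the "command failed" kubelet exits recorded above.
    failures = ["16:24:57.58", "16:25:08.48", "16:25:19.36", "16:25:30.03", "16:25:40.77"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in failures]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print([f"{g:.1f}s" for g in gaps])  # roughly 10.9s, 10.9s, 10.7s, 10.7s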
Jun 25 16:25:40.868143 sshd[1890]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:40.871511 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:42190.service: Deactivated successfully. Jun 25 16:25:40.872521 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:25:40.873308 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:25:40.874204 systemd-logind[1471]: Removed session 6. Jun 25 16:25:40.981709 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:42200.service - OpenSSH per-connection server daemon (10.200.16.10:42200). Jun 25 16:25:41.626627 sshd[1910]: Accepted publickey for core from 10.200.16.10 port 42200 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:41.627292 sshd[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:41.637412 systemd-logind[1471]: New session 7 of user core. Jun 25 16:25:41.640783 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:25:41.656620 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1920) Jun 25 16:25:41.838616 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1919) Jun 25 16:25:42.078917 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:25:42.079274 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:25:42.103695 sudo[1976]: pam_unix(sudo:session): session closed for user root Jun 25 16:25:42.207756 sshd[1910]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:42.211762 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:42200.service: Deactivated successfully. Jun 25 16:25:42.212838 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:25:42.213682 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:25:42.214709 systemd-logind[1471]: Removed session 7. Jun 25 16:25:42.323911 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:42208.service - OpenSSH per-connection server daemon (10.200.16.10:42208). Jun 25 16:25:42.971199 sshd[1980]: Accepted publickey for core from 10.200.16.10 port 42208 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:42.972923 sshd[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:42.977893 systemd-logind[1471]: New session 8 of user core. Jun 25 16:25:42.984833 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:25:43.340381 sudo[1984]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:25:43.340745 sudo[1984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:25:43.344007 sudo[1984]: pam_unix(sudo:session): session closed for user root Jun 25 16:25:43.349029 sudo[1983]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:25:43.349351 sudo[1983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:25:43.364142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jun 25 16:25:43.364000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:25:43.366246 auditctl[1987]: No rules Jun 25 16:25:43.371423 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 16:25:43.371506 kernel: audit: type=1305 audit(1719332743.364:211): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:25:43.371532 kernel: audit: type=1300 audit(1719332743.364:211): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe42a4c1e0 a2=420 a3=0 items=0 ppid=1 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.364000 audit[1987]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe42a4c1e0 a2=420 a3=0 items=0 ppid=1 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.366714 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:25:43.366882 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:25:43.368847 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:25:43.384604 kernel: audit: type=1327 audit(1719332743.364:211): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:25:43.364000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:25:43.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.392623 kernel: audit: type=1131 audit(1719332743.365:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.395942 augenrules[2004]: No rules Jun 25 16:25:43.396577 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:25:43.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.398818 sudo[1983]: pam_unix(sudo:session): session closed for user root Jun 25 16:25:43.397000 audit[1983]: USER_END pid=1983 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.412283 kernel: audit: type=1130 audit(1719332743.395:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.412353 kernel: audit: type=1106 audit(1719332743.397:214): pid=1983 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:43.412382 kernel: audit: type=1104 audit(1719332743.397:215): pid=1983 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.397000 audit[1983]: CRED_DISP pid=1983 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.503354 sshd[1980]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:43.503000 audit[1980]: USER_END pid=1980 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:43.506956 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:42208.service: Deactivated successfully. Jun 25 16:25:43.507814 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:25:43.509477 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:25:43.510297 systemd-logind[1471]: Removed session 8. Jun 25 16:25:43.503000 audit[1980]: CRED_DISP pid=1980 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:43.523287 kernel: audit: type=1106 audit(1719332743.503:216): pid=1980 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:43.523363 kernel: audit: type=1104 audit(1719332743.503:217): pid=1980 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:43.523393 kernel: audit: type=1131 audit(1719332743.505:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.4:22-10.200.16.10:42208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.4:22-10.200.16.10:42208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.617779 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:42224.service - OpenSSH per-connection server daemon (10.200.16.10:42224). Jun 25 16:25:43.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.4:22-10.200.16.10:42224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:44.258000 audit[2010]: USER_ACCT pid=2010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:44.260855 sshd[2010]: Accepted publickey for core from 10.200.16.10 port 42224 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:25:44.260000 audit[2010]: CRED_ACQ pid=2010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:44.260000 audit[2010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce1a7fe10 a2=3 a3=7f0e3d62b480 items=0 ppid=1 pid=2010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:44.260000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:44.262495 sshd[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:44.266877 systemd-logind[1471]: New session 9 of user core. Jun 25 16:25:44.276821 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:25:44.279000 audit[2010]: USER_START pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:44.281000 audit[2012]: CRED_ACQ pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:25:44.613000 audit[2013]: USER_ACCT pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:44.613000 audit[2013]: CRED_REFR pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:44.615216 sudo[2013]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:25:44.616044 sudo[2013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:25:44.616000 audit[2013]: USER_START pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:45.211202 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:25:46.272201 dockerd[2022]: time="2024-06-25T16:25:46.272135790Z" level=info msg="Starting up" Jun 25 16:25:46.353961 dockerd[2022]: time="2024-06-25T16:25:46.353906088Z" level=info msg="Loading containers: start." 
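Annotation: the audit records earlier in this log, and the NETFILTER_CFG records that follow, carry a PROCTITLE field that is simply the process's argv hex-encoded with NUL separators between arguments. Decoding two of the values that appear here makes the underlying commands obvious; the helper below is a standalone illustration:

    def decode_proctitle(hexstr: str) -> str:
        """Turn an audit PROCTITLE hex string back into the original command line."""
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    # From the audit-rules reload earlier in the log:
    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D

    # From the first docker NETFILTER_CFG record below:
    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER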
Jun 25 16:25:46.413000 audit[2051]: NETFILTER_CFG table=nat:6 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.413000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd9508c300 a2=0 a3=7fd89ecdae90 items=0 ppid=2022 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.413000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:25:46.415000 audit[2053]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.415000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde78ada80 a2=0 a3=7f37c686ae90 items=0 ppid=2022 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.415000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:25:46.417000 audit[2055]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.417000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc91b6270 a2=0 a3=7f544545fe90 items=0 ppid=2022 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.417000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:25:46.419000 audit[2057]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.419000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff9d90fa20 a2=0 a3=7f22775c0e90 items=0 ppid=2022 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.419000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:25:46.421000 audit[2059]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.421000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffddd70dc10 a2=0 a3=7f5f3db28e90 items=0 ppid=2022 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.421000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:25:46.423000 audit[2061]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:25:46.423000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffd462ce60 a2=0 a3=7f140da3be90 items=0 ppid=2022 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.423000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:25:46.445000 audit[2063]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.445000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2e0ca910 a2=0 a3=7fd83156ae90 items=0 ppid=2022 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:25:46.447000 audit[2065]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.447000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc1b8c6130 a2=0 a3=7f9bb780be90 items=0 ppid=2022 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.447000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:25:46.449000 audit[2067]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.449000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe2b7d6150 a2=0 a3=7fb1128ebe90 items=0 ppid=2022 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.449000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:25:46.466000 audit[2071]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_unregister_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.466000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdf711da10 a2=0 a3=7f2829c9be90 items=0 ppid=2022 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:25:46.467000 audit[2072]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.467000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe4db4c140 a2=0 a3=7fcd687b1e90 items=0 ppid=2022 
pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.467000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:25:46.504612 kernel: Initializing XFRM netlink socket Jun 25 16:25:46.556000 audit[2080]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.556000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffdfb9d0fb0 a2=0 a3=7f552a6cae90 items=0 ppid=2022 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.556000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:25:46.566000 audit[2083]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.566000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd829dbae0 a2=0 a3=7f533ecfee90 items=0 ppid=2022 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.566000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:25:46.570000 audit[2087]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.570000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffce4c6fe50 a2=0 a3=7f0da1754e90 items=0 ppid=2022 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.570000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:25:46.572000 audit[2089]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.572000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc3d9f6520 a2=0 a3=7ffbb2e30e90 items=0 ppid=2022 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:25:46.575000 audit[2091]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.575000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fffebcfc150 
a2=0 a3=7ff48a5e0e90 items=0 ppid=2022 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.575000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:25:46.577000 audit[2093]: NETFILTER_CFG table=nat:22 family=2 entries=2 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.577000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc3eecd4c0 a2=0 a3=7fdf82e07e90 items=0 ppid=2022 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.577000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:25:46.579000 audit[2095]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.579000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff55571b90 a2=0 a3=7f112f714e90 items=0 ppid=2022 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.579000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:25:46.581000 audit[2097]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.581000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffda900e690 a2=0 a3=7fa749882e90 items=0 ppid=2022 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.581000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:25:46.584000 audit[2099]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.584000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fffd6f0aac0 a2=0 a3=7fea75cdfe90 items=0 ppid=2022 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.584000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:25:46.586000 audit[2101]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:25:46.586000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe57cc7090 a2=0 a3=7f4ee1be5e90 items=0 ppid=2022 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.586000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:25:46.588000 audit[2103]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.588000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc5a30e6b0 a2=0 a3=7f0e158e5e90 items=0 ppid=2022 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.588000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:25:46.590499 systemd-networkd[1234]: docker0: Link UP Jun 25 16:25:46.709000 audit[2107]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_unregister_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.709000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc540fa6e0 a2=0 a3=7fb93f20ce90 items=0 ppid=2022 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.709000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:25:46.710000 audit[2108]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:46.710000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd87c4dd30 a2=0 a3=7f2791f4ee90 items=0 ppid=2022 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.710000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:25:46.712609 dockerd[2022]: time="2024-06-25T16:25:46.712561104Z" level=info msg="Loading containers: done." 
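Note: the NETFILTER_CFG/SYSCALL/PROCTITLE triplets above are dockerd (ppid 2022) shelling out to /usr/sbin/xtables-nft-multi to install its DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE-1/2 chains plus the 172.17.0.0/16 MASQUERADE rule before "Loading containers: done." The proctitle field is the command's argv, hex-encoded with NUL separators, so it can be read back directly. A minimal decoding sketch (the helper name is mine; the sample value is the first PROCTITLE record above):

    def decode_proctitle(hex_value: str) -> str:
        # Audit PROCTITLE records store argv as hex bytes with NUL between arguments.
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    sample = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
              "002D4900464F5257415244002D6A00444F434B45522D55534552")
    print(decode_proctitle(sample))
    # -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER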
Jun 25 16:25:46.960806 dockerd[2022]: time="2024-06-25T16:25:46.960735236Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:25:46.961054 dockerd[2022]: time="2024-06-25T16:25:46.961029440Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:25:46.961221 dockerd[2022]: time="2024-06-25T16:25:46.961194342Z" level=info msg="Daemon has completed initialization" Jun 25 16:25:47.026255 dockerd[2022]: time="2024-06-25T16:25:47.026184694Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:25:47.028066 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:25:47.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:48.669574 containerd[1477]: time="2024-06-25T16:25:48.669521954Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:25:49.480099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664052795.mount: Deactivated successfully. Jun 25 16:25:50.882455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 16:25:50.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:50.885234 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 16:25:50.885288 kernel: audit: type=1130 audit(1719332750.881:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:50.882833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:50.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:50.901045 kernel: audit: type=1131 audit(1719332750.881:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:50.908156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:25:51.012338 kernel: audit: type=1130 audit(1719332751.001:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:51.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:51.002296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:25:52.698740 kernel: audit: type=1131 audit(1719332751.555:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:25:51.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:25:52.699020 kubelet[2206]: E0625 16:25:51.555118 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:25:51.556733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:25:51.556861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:25:53.809396 containerd[1477]: time="2024-06-25T16:25:53.809337756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:53.811295 containerd[1477]: time="2024-06-25T16:25:53.811239972Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186" Jun 25 16:25:53.815548 containerd[1477]: time="2024-06-25T16:25:53.815514809Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:53.819440 containerd[1477]: time="2024-06-25T16:25:53.819403242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:53.826280 containerd[1477]: time="2024-06-25T16:25:53.826245000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:53.827291 containerd[1477]: time="2024-06-25T16:25:53.827245409Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 5.157669754s" Jun 25 16:25:53.827388 containerd[1477]: time="2024-06-25T16:25:53.827299209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:25:53.852924 containerd[1477]: time="2024-06-25T16:25:53.852874128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:25:56.627395 containerd[1477]: time="2024-06-25T16:25:56.627330336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:56.632087 containerd[1477]: time="2024-06-25T16:25:56.632018769Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499" Jun 25 16:25:56.634678 containerd[1477]: time="2024-06-25T16:25:56.634636687Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
16:25:56.639654 containerd[1477]: time="2024-06-25T16:25:56.639617522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:56.644394 containerd[1477]: time="2024-06-25T16:25:56.644346956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:56.645548 containerd[1477]: time="2024-06-25T16:25:56.645506064Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.792579736s" Jun 25 16:25:56.645711 containerd[1477]: time="2024-06-25T16:25:56.645683965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:25:56.667539 containerd[1477]: time="2024-06-25T16:25:56.667499393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:25:58.466770 containerd[1477]: time="2024-06-25T16:25:58.466710453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:58.468843 containerd[1477]: time="2024-06-25T16:25:58.468785895Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513" Jun 25 16:25:58.474722 containerd[1477]: time="2024-06-25T16:25:58.474685364Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:58.479400 containerd[1477]: time="2024-06-25T16:25:58.479355284Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:58.484858 containerd[1477]: time="2024-06-25T16:25:58.484826906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:58.485704 containerd[1477]: time="2024-06-25T16:25:58.485662302Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.818117811s" Jun 25 16:25:58.485822 containerd[1477]: time="2024-06-25T16:25:58.485711796Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:25:58.507182 containerd[1477]: time="2024-06-25T16:25:58.507136238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:25:59.954375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439583870.mount: Deactivated 
successfully. Jun 25 16:26:00.403771 containerd[1477]: time="2024-06-25T16:26:00.403709221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:00.406944 containerd[1477]: time="2024-06-25T16:26:00.406872950Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427" Jun 25 16:26:00.410918 containerd[1477]: time="2024-06-25T16:26:00.410874981Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:00.415337 containerd[1477]: time="2024-06-25T16:26:00.415301462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:00.419167 containerd[1477]: time="2024-06-25T16:26:00.419128613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:00.419886 containerd[1477]: time="2024-06-25T16:26:00.419839329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.912651598s" Jun 25 16:26:00.419982 containerd[1477]: time="2024-06-25T16:26:00.419892423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:26:00.442252 containerd[1477]: time="2024-06-25T16:26:00.442206706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:26:01.030555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415359587.mount: Deactivated successfully. 
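Note: the "var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount: Deactivated successfully" entries are systemd mount units whose names are escaped paths: "-" stands for "/" between path components and "\x2d" is a literal dash. A small unescaping sketch sufficient for the names in this log (the function name is illustrative, and Python 3.9+ is assumed for removesuffix; the systemd-escape tool with --unescape --path should give the same answer from the shell):

    def unescape_mount_unit(unit: str) -> str:
        # Reverse systemd's path escaping: "\xHH" is a literal byte, "-" is a path separator.
        name = unit.removesuffix(".mount")
        out, i = [], 0
        while i < len(name):
            if name.startswith("\\x", i):           # e.g. \x2d -> "-"
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":                    # component separator
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount1415359587.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount1415359587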
Jun 25 16:26:01.058107 containerd[1477]: time="2024-06-25T16:26:01.058057762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.061672 containerd[1477]: time="2024-06-25T16:26:01.061610157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 16:26:01.065085 containerd[1477]: time="2024-06-25T16:26:01.065051764Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.077730 containerd[1477]: time="2024-06-25T16:26:01.077683923Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.082164 containerd[1477]: time="2024-06-25T16:26:01.082122717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.082870 containerd[1477]: time="2024-06-25T16:26:01.082825537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 640.575836ms" Jun 25 16:26:01.083719 containerd[1477]: time="2024-06-25T16:26:01.082875431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:26:01.104478 containerd[1477]: time="2024-06-25T16:26:01.104433872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:26:01.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.632382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 16:26:01.632682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:01.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.648374 kernel: audit: type=1130 audit(1719332761.632:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.648452 kernel: audit: type=1131 audit(1719332761.632:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.650113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:02.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:02.297452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
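Note: the kernel "audit: type=1130 audit(1719332761.632:257)" echoes carry the same events as the journald-prefixed SERVICE_START/SERVICE_STOP lines; the first number inside audit(...) is the epoch timestamp and the value after the colon is the event serial. Converting it confirms the two clocks agree (the journal prefix here appears to be UTC):

    from datetime import datetime, timezone

    # Epoch timestamp taken from the audit record above.
    print(datetime.fromtimestamp(1719332761.632, tz=timezone.utc))
    # -> 2024-06-25 16:26:01.632000+00:00, matching the "Jun 25 16:26:01.632" journal prefix.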
Jun 25 16:26:02.307613 kernel: audit: type=1130 audit(1719332762.297:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:02.575864 kubelet[2265]: E0625 16:26:02.575708 2265 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:02.577672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:02.577840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:02.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:02.587662 kernel: audit: type=1131 audit(1719332762.577:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:03.510237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355981956.mount: Deactivated successfully. Jun 25 16:26:07.810438 containerd[1477]: time="2024-06-25T16:26:07.810378008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:07.817179 containerd[1477]: time="2024-06-25T16:26:07.817130754Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 16:26:07.822055 containerd[1477]: time="2024-06-25T16:26:07.822017882Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:07.825579 containerd[1477]: time="2024-06-25T16:26:07.825547640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:07.829545 containerd[1477]: time="2024-06-25T16:26:07.829517656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:07.830636 containerd[1477]: time="2024-06-25T16:26:07.830586053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.726112285s" Jun 25 16:26:07.830776 containerd[1477]: time="2024-06-25T16:26:07.830752437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:26:07.852641 containerd[1477]: time="2024-06-25T16:26:07.852585125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:26:08.518550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126834036.mount: Deactivated successfully. 
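Note: kubelet[2265] exits because /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps cycling the unit (SERVICE_START, exit-code failure, SERVICE_STOP, "Scheduled restart job, restart counter is at N"). That is the usual state of a node before kubeadm init/join writes that config file. The same restart bookkeeping the journal shows piecemeal can be read back from systemd in one call; a sketch, assuming a systemd host recent enough to expose the NRestarts property (unit_restart_info is an illustrative helper):

    import subprocess

    def unit_restart_info(unit: str = "kubelet.service") -> dict:
        # Ask systemd for the restart counter and the unit's last result/state.
        out = subprocess.run(
            ["systemctl", "show", unit,
             "--property=NRestarts,ActiveState,SubState,Result"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    print(unit_restart_info())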
Jun 25 16:26:09.406826 containerd[1477]: time="2024-06-25T16:26:09.406769722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.409368 containerd[1477]: time="2024-06-25T16:26:09.409307590Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Jun 25 16:26:09.412466 containerd[1477]: time="2024-06-25T16:26:09.412430103Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.416763 containerd[1477]: time="2024-06-25T16:26:09.416731209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.420155 containerd[1477]: time="2024-06-25T16:26:09.420122998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.420879 containerd[1477]: time="2024-06-25T16:26:09.420835133Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.568168116s" Jun 25 16:26:09.420959 containerd[1477]: time="2024-06-25T16:26:09.420886628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:26:11.908176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:11.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.924564 kernel: audit: type=1130 audit(1719332771.907:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.924685 kernel: audit: type=1131 audit(1719332771.907:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.940779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:11.963487 systemd[1]: Reloading. Jun 25 16:26:12.148796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
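Note: with the coredns pull above, all seven registry.k8s.io images containerd reports in this section have completed. The pulls run one after another here, so summing the reported sizes and durations gives a rough effective pull rate for the whole set. The figures below are copied from the "Pulled image ... size ... in ..." lines above (sizes in bytes, durations in Go notation); the helper name is mine:

    # Sizes and durations exactly as reported by containerd in this log.
    PULLS = {
        "kube-apiserver:v1.28.11":          (34_601_978, "5.157669754s"),
        "kube-controller-manager:v1.28.11": (33_315_989, "2.792579736s"),
        "kube-scheduler:v1.28.11":          (18_522_021, "1.818117811s"),
        "kube-proxy:v1.28.11":              (28_117_438, "1.912651598s"),
        "pause:3.9":                        (321_520,    "640.575836ms"),
        "etcd:3.5.10-0":                    (56_649_232, "6.726112285s"),
        "coredns:v1.10.1":                  (16_190_758, "1.568168116s"),
    }

    def seconds(duration: str) -> float:
        # Only the two suffixes that occur in this log are handled: "ms" and "s".
        return float(duration[:-2]) / 1000.0 if duration.endswith("ms") else float(duration[:-1])

    total_bytes = sum(size for size, _ in PULLS.values())
    total_secs = sum(seconds(d) for _, d in PULLS.values())
    print(f"{total_bytes / 1e6:.1f} MB in {total_secs:.1f} s "
          f"(~{total_bytes / total_secs / 1e6:.1f} MB/s effective)")
    # -> 187.7 MB in 20.6 s (~9.1 MB/s effective)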
Jun 25 16:26:12.225354 kernel: audit: type=1334 audit(1719332772.217:263): prog-id=72 op=LOAD Jun 25 16:26:12.225487 kernel: audit: type=1334 audit(1719332772.217:264): prog-id=58 op=UNLOAD Jun 25 16:26:12.217000 audit: BPF prog-id=72 op=LOAD Jun 25 16:26:12.217000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:26:12.230360 kernel: audit: type=1334 audit(1719332772.219:265): prog-id=73 op=LOAD Jun 25 16:26:12.219000 audit: BPF prog-id=73 op=LOAD Jun 25 16:26:12.219000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:26:12.220000 audit: BPF prog-id=74 op=LOAD Jun 25 16:26:12.241424 kernel: audit: type=1334 audit(1719332772.219:266): prog-id=59 op=UNLOAD Jun 25 16:26:12.241500 kernel: audit: type=1334 audit(1719332772.220:267): prog-id=74 op=LOAD Jun 25 16:26:12.244896 kernel: audit: type=1334 audit(1719332772.220:268): prog-id=60 op=UNLOAD Jun 25 16:26:12.220000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:26:12.220000 audit: BPF prog-id=75 op=LOAD Jun 25 16:26:12.247870 kernel: audit: type=1334 audit(1719332772.220:269): prog-id=75 op=LOAD Jun 25 16:26:12.220000 audit: BPF prog-id=76 op=LOAD Jun 25 16:26:12.250648 kernel: audit: type=1334 audit(1719332772.220:270): prog-id=76 op=LOAD Jun 25 16:26:12.220000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:26:12.220000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:26:12.221000 audit: BPF prog-id=77 op=LOAD Jun 25 16:26:12.221000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:26:12.222000 audit: BPF prog-id=78 op=LOAD Jun 25 16:26:12.222000 audit: BPF prog-id=79 op=LOAD Jun 25 16:26:12.222000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:26:12.222000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:26:12.224000 audit: BPF prog-id=80 op=LOAD Jun 25 16:26:12.224000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:26:12.224000 audit: BPF prog-id=81 op=LOAD Jun 25 16:26:12.224000 audit: BPF prog-id=82 op=LOAD Jun 25 16:26:12.224000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:26:12.224000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:26:12.226000 audit: BPF prog-id=83 op=LOAD Jun 25 16:26:12.226000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:26:12.226000 audit: BPF prog-id=84 op=LOAD Jun 25 16:26:12.226000 audit: BPF prog-id=85 op=LOAD Jun 25 16:26:12.226000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:26:12.226000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:26:12.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:12.593649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:26:12.593782 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:26:12.594111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:12.602282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:16.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.235334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:16.959779 kubelet[2473]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
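Note: the BPF prog-id LOAD/UNLOAD pairs appear to be systemd reinstalling its per-unit BPF programs as part of the "Reloading." above, and the restart that closes this block brings up kubelet[2473] with flags the log itself marks as deprecated in favour of the --config file. In the kubelet lines that follow, every request to https://10.200.8.4:6443 fails with "connect: connection refused": nothing is listening there yet because the control-plane static pods have not started. Refused is a different failure mode from a timeout (the host actively rejects the connection); a minimal probe that reproduces the distinction, with probe as an illustrative helper and the address taken from the kubelet errors below:

    import socket

    def probe(host: str, port: int, timeout: float = 3.0) -> str:
        # Mirrors the failure modes behind Go's "dial tcp" errors:
        # refused means the host answered but no process is listening on that port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except ConnectionRefusedError:
            return "connection refused (no listener yet)"
        except OSError as exc:                      # covers timeouts and routing errors
            return f"unreachable: {exc}"

    print(probe("10.200.8.4", 6443))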
Jun 25 16:26:16.960233 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:26:16.960302 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:26:16.960478 kubelet[2473]: I0625 16:26:16.960429 2473 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:26:17.353654 kubelet[2473]: I0625 16:26:17.353305 2473 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:26:17.353654 kubelet[2473]: I0625 16:26:17.353335 2473 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:26:17.353850 kubelet[2473]: I0625 16:26:17.353763 2473 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:26:17.369976 kubelet[2473]: E0625 16:26:17.369928 2473 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.370227 kubelet[2473]: I0625 16:26:17.370210 2473 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:26:17.378883 kubelet[2473]: I0625 16:26:17.378848 2473 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:26:17.380822 kubelet[2473]: I0625 16:26:17.380794 2473 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:26:17.381025 kubelet[2473]: I0625 16:26:17.381003 2473 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:26:17.381178 kubelet[2473]: I0625 16:26:17.381034 2473 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:26:17.381178 kubelet[2473]: I0625 16:26:17.381048 2473 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:26:17.381733 kubelet[2473]: I0625 16:26:17.381713 2473 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:17.382925 kubelet[2473]: I0625 16:26:17.382906 2473 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:26:17.383015 kubelet[2473]: I0625 16:26:17.382931 2473 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:26:17.383015 kubelet[2473]: I0625 16:26:17.382961 2473 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:26:17.383015 kubelet[2473]: I0625 16:26:17.382980 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:26:17.384799 kubelet[2473]: W0625 16:26:17.384644 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.384799 kubelet[2473]: E0625 16:26:17.384700 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.384799 kubelet[2473]: W0625 16:26:17.384772 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-a46e2cd05c&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 
16:26:17.385016 kubelet[2473]: E0625 16:26:17.384813 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-a46e2cd05c&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.385016 kubelet[2473]: I0625 16:26:17.384888 2473 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:26:17.387668 kubelet[2473]: W0625 16:26:17.387641 2473 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:26:17.388516 kubelet[2473]: I0625 16:26:17.388494 2473 server.go:1232] "Started kubelet" Jun 25 16:26:17.395077 kubelet[2473]: E0625 16:26:17.394967 2473 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-a46e2cd05c.17dc4c0d36ecf258", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-a46e2cd05c", UID:"ci-3815.2.4-a-a46e2cd05c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-a46e2cd05c"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 26, 17, 388470872, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 26, 17, 388470872, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-a46e2cd05c"}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:26:17.395231 kubelet[2473]: I0625 16:26:17.395109 2473 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:26:17.396024 kubelet[2473]: I0625 16:26:17.396001 2473 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:26:17.396443 kubelet[2473]: I0625 16:26:17.396426 2473 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:26:17.397012 kubelet[2473]: I0625 16:26:17.396991 2473 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:26:17.400050 kubelet[2473]: E0625 16:26:17.400032 2473 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:26:17.400168 kubelet[2473]: E0625 16:26:17.400157 2473 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:26:17.400931 kubelet[2473]: I0625 16:26:17.400907 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:26:17.414493 kernel: kauditd_printk_skb: 22 callbacks suppressed Jun 25 16:26:17.414743 kernel: audit: type=1325 audit(1719332777.403:293): table=mangle:30 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.403000 audit[2483]: NETFILTER_CFG table=mangle:30 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.414884 kubelet[2473]: E0625 16:26:17.405989 2473 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3815.2.4-a-a46e2cd05c\" not found" Jun 25 16:26:17.414884 kubelet[2473]: I0625 16:26:17.406024 2473 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:26:17.414884 kubelet[2473]: I0625 16:26:17.406101 2473 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:26:17.414884 kubelet[2473]: I0625 16:26:17.406154 2473 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:26:17.414884 kubelet[2473]: W0625 16:26:17.406452 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.414884 kubelet[2473]: E0625 16:26:17.406489 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.414884 kubelet[2473]: E0625 16:26:17.406694 2473 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" Jun 25 16:26:17.403000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffefd9fe640 a2=0 a3=7fa4274a4e90 items=0 ppid=2473 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.429607 kernel: audit: type=1300 audit(1719332777.403:293): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffefd9fe640 a2=0 a3=7fa4274a4e90 items=0 ppid=2473 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.429695 kernel: audit: type=1327 audit(1719332777.403:293): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:26:17.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:26:17.415000 audit[2484]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.442739 kernel: audit: type=1325 audit(1719332777.415:294): 
table=filter:31 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.456743 kernel: audit: type=1300 audit(1719332777.415:294): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce1f214d0 a2=0 a3=7f32d134fe90 items=0 ppid=2473 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.415000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce1f214d0 a2=0 a3=7f32d134fe90 items=0 ppid=2473 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:26:17.463512 kubelet[2473]: I0625 16:26:17.457079 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:26:17.463512 kubelet[2473]: I0625 16:26:17.458757 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:26:17.463512 kubelet[2473]: I0625 16:26:17.458784 2473 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:26:17.463512 kubelet[2473]: I0625 16:26:17.458804 2473 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:26:17.463512 kubelet[2473]: E0625 16:26:17.458847 2473 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:26:17.463512 kubelet[2473]: W0625 16:26:17.460216 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.463512 kubelet[2473]: E0625 16:26:17.460273 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:17.463855 kernel: audit: type=1327 audit(1719332777.415:294): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:26:17.428000 audit[2488]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.428000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd17a08f30 a2=0 a3=7ff9465d4e90 items=0 ppid=2473 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.481688 kernel: audit: type=1325 audit(1719332777.428:295): table=filter:32 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.481767 kernel: audit: type=1300 audit(1719332777.428:295): arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd17a08f30 a2=0 a3=7ff9465d4e90 items=0 ppid=2473 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.483061 kubelet[2473]: I0625 16:26:17.483040 2473 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:26:17.483220 kubelet[2473]: I0625 16:26:17.483208 2473 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:26:17.483331 kubelet[2473]: I0625 16:26:17.483323 2473 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:17.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:17.488859 kernel: audit: type=1327 audit(1719332777.428:295): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:17.435000 audit[2492]: NETFILTER_CFG table=filter:33 family=2 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.494667 kernel: audit: type=1325 audit(1719332777.435:296): table=filter:33 family=2 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.435000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc0d2ed710 a2=0 a3=7f64cd2e0e90 items=0 ppid=2473 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:17.455000 audit[2496]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.455000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe9b26e930 a2=0 a3=7fdca072ee90 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:26:17.455000 audit[2497]: NETFILTER_CFG table=mangle:35 family=10 entries=2 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:17.455000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe619282d0 a2=0 a3=7f1030d36e90 items=0 ppid=2473 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:26:17.455000 audit[2498]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.455000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffeb566f9c0 a2=0 a3=7f2f3d3bae90 items=0 ppid=2473 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:26:17.461000 audit[2499]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:17.461000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7736f1d0 a2=0 a3=7f05004b0e90 items=0 ppid=2473 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:26:17.462000 audit[2500]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.462000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe29f2e230 a2=0 a3=7fa03c9b2e90 items=0 ppid=2473 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:26:17.462000 audit[2501]: NETFILTER_CFG table=nat:39 family=10 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:17.462000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc1a8c8060 a2=0 a3=7f861e92ce90 items=0 ppid=2473 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:26:17.462000 audit[2502]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:17.462000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff489c41e0 a2=0 a3=7fd4d03c1e90 items=0 ppid=2473 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:26:17.462000 audit[2503]: NETFILTER_CFG table=filter:41 family=10 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:17.462000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd9717b020 a2=0 a3=7f58d7645e90 items=0 ppid=2473 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:17.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:26:17.508328 kubelet[2473]: I0625 16:26:17.508306 2473 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:17.508695 kubelet[2473]: E0625 16:26:17.508677 2473 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:17.559936 kubelet[2473]: E0625 16:26:17.559883 2473 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:26:17.607842 kubelet[2473]: E0625 16:26:17.607714 2473 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" Jun 25 16:26:17.710752 kubelet[2473]: I0625 16:26:17.710726 2473 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:17.711200 kubelet[2473]: E0625 16:26:17.711177 2473 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:17.760449 kubelet[2473]: E0625 16:26:17.760392 2473 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:26:18.008298 kubelet[2473]: E0625 16:26:18.008257 2473 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" Jun 25 16:26:18.113739 kubelet[2473]: I0625 16:26:18.113701 2473 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.114133 kubelet[2473]: E0625 16:26:18.114109 2473 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.161353 kubelet[2473]: E0625 16:26:18.161292 2473 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:26:18.266502 kubelet[2473]: W0625 16:26:18.266369 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.266502 kubelet[2473]: E0625 16:26:18.266420 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.590544 kubelet[2473]: W0625 16:26:18.520683 2473 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.590544 kubelet[2473]: E0625 16:26:18.520755 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.594675 kubelet[2473]: I0625 16:26:18.594638 2473 policy_none.go:49] "None policy: Start" Jun 25 16:26:18.595790 kubelet[2473]: I0625 16:26:18.595718 2473 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:26:18.595790 kubelet[2473]: I0625 16:26:18.595765 2473 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:26:18.606080 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:26:18.616148 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:26:18.619093 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 16:26:18.628365 kubelet[2473]: I0625 16:26:18.628340 2473 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:26:18.629224 kubelet[2473]: I0625 16:26:18.629208 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:26:18.630445 kubelet[2473]: W0625 16:26:18.630386 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-a46e2cd05c&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.630604 kubelet[2473]: E0625 16:26:18.630579 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-a46e2cd05c&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.630930 kubelet[2473]: E0625 16:26:18.630866 2473 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-a46e2cd05c\" not found" Jun 25 16:26:18.751751 kubelet[2473]: W0625 16:26:18.751698 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.751751 kubelet[2473]: E0625 16:26:18.751750 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:18.809264 kubelet[2473]: E0625 16:26:18.809228 2473 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s" Jun 25 16:26:18.915923 kubelet[2473]: I0625 16:26:18.915888 2473 kubelet_node_status.go:70] 
"Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.916327 kubelet[2473]: E0625 16:26:18.916298 2473 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.961604 kubelet[2473]: I0625 16:26:18.961540 2473 topology_manager.go:215] "Topology Admit Handler" podUID="4a8cac472ce10d5f0c073f465ad87c4b" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.963502 kubelet[2473]: I0625 16:26:18.963473 2473 topology_manager.go:215] "Topology Admit Handler" podUID="1a004769106948959b725ac3adfffcc4" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.965669 kubelet[2473]: I0625 16:26:18.965641 2473 topology_manager.go:215] "Topology Admit Handler" podUID="a617164dee56ad05f4096a408f719e57" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:18.972236 systemd[1]: Created slice kubepods-burstable-pod4a8cac472ce10d5f0c073f465ad87c4b.slice - libcontainer container kubepods-burstable-pod4a8cac472ce10d5f0c073f465ad87c4b.slice. Jun 25 16:26:18.986517 systemd[1]: Created slice kubepods-burstable-poda617164dee56ad05f4096a408f719e57.slice - libcontainer container kubepods-burstable-poda617164dee56ad05f4096a408f719e57.slice. Jun 25 16:26:18.990281 systemd[1]: Created slice kubepods-burstable-pod1a004769106948959b725ac3adfffcc4.slice - libcontainer container kubepods-burstable-pod1a004769106948959b725ac3adfffcc4.slice. Jun 25 16:26:19.015608 kubelet[2473]: I0625 16:26:19.015561 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016044 kubelet[2473]: I0625 16:26:19.015632 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016044 kubelet[2473]: I0625 16:26:19.015659 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016044 kubelet[2473]: I0625 16:26:19.015685 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016044 kubelet[2473]: I0625 16:26:19.015713 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016044 kubelet[2473]: I0625 16:26:19.015740 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016201 kubelet[2473]: I0625 16:26:19.015773 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016201 kubelet[2473]: I0625 16:26:19.015801 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.016201 kubelet[2473]: I0625 16:26:19.015829 2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a004769106948959b725ac3adfffcc4-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-a46e2cd05c\" (UID: \"1a004769106948959b725ac3adfffcc4\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:19.285152 containerd[1477]: time="2024-06-25T16:26:19.284717653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-a46e2cd05c,Uid:4a8cac472ce10d5f0c073f465ad87c4b,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:19.290283 containerd[1477]: time="2024-06-25T16:26:19.290244063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-a46e2cd05c,Uid:a617164dee56ad05f4096a408f719e57,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:19.293078 containerd[1477]: time="2024-06-25T16:26:19.293045865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-a46e2cd05c,Uid:1a004769106948959b725ac3adfffcc4,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:19.415235 kubelet[2473]: E0625 16:26:19.415188 2473 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:19.970243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199414806.mount: Deactivated successfully. 
Jun 25 16:26:19.999870 containerd[1477]: time="2024-06-25T16:26:19.999816383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.002199 containerd[1477]: time="2024-06-25T16:26:20.002147621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 16:26:20.004992 containerd[1477]: time="2024-06-25T16:26:20.004955028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.006880 containerd[1477]: time="2024-06-25T16:26:20.006835199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:26:20.010214 containerd[1477]: time="2024-06-25T16:26:20.010180969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.012777 containerd[1477]: time="2024-06-25T16:26:20.012743992Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.015766 containerd[1477]: time="2024-06-25T16:26:20.015732886Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.018117 containerd[1477]: time="2024-06-25T16:26:20.018084025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.021222 containerd[1477]: time="2024-06-25T16:26:20.021188511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.023973 containerd[1477]: time="2024-06-25T16:26:20.023941522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.024733 containerd[1477]: time="2024-06-25T16:26:20.024700269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.840626ms" Jun 25 16:26:20.026614 containerd[1477]: time="2024-06-25T16:26:20.026564141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:26:20.030142 containerd[1477]: time="2024-06-25T16:26:20.030105397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jun 25 16:26:20.031216 containerd[1477]: time="2024-06-25T16:26:20.031159825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 740.821268ms" Jun 25 16:26:20.047202 containerd[1477]: time="2024-06-25T16:26:20.047150424Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.068176 containerd[1477]: time="2024-06-25T16:26:20.068115082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.088840 containerd[1477]: time="2024-06-25T16:26:20.088781160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:20.089600 containerd[1477]: time="2024-06-25T16:26:20.089545807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 796.417848ms" Jun 25 16:26:20.408018 kubelet[2473]: W0625 16:26:20.407982 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.408018 kubelet[2473]: E0625 16:26:20.408022 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.410320 kubelet[2473]: E0625 16:26:20.410292 2473 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="3.2s" Jun 25 16:26:20.518740 kubelet[2473]: I0625 16:26:20.518709 2473 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:20.519111 kubelet[2473]: E0625 16:26:20.519087 2473 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:20.653617 kubelet[2473]: W0625 16:26:20.653497 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.653793 kubelet[2473]: E0625 16:26:20.653630 2473 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.714640 kubelet[2473]: W0625 16:26:20.714503 2473 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.714640 kubelet[2473]: E0625 16:26:20.714548 2473 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jun 25 16:26:20.773469 containerd[1477]: time="2024-06-25T16:26:20.773363850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:20.774007 containerd[1477]: time="2024-06-25T16:26:20.773965009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.774161 containerd[1477]: time="2024-06-25T16:26:20.774134197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:20.774286 containerd[1477]: time="2024-06-25T16:26:20.774260888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.775000 containerd[1477]: time="2024-06-25T16:26:20.774915743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:20.775133 containerd[1477]: time="2024-06-25T16:26:20.774989338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.775133 containerd[1477]: time="2024-06-25T16:26:20.775031335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:20.775133 containerd[1477]: time="2024-06-25T16:26:20.775050934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.781620 containerd[1477]: time="2024-06-25T16:26:20.781543787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:20.781747 containerd[1477]: time="2024-06-25T16:26:20.781629281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.781747 containerd[1477]: time="2024-06-25T16:26:20.781649780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:20.781747 containerd[1477]: time="2024-06-25T16:26:20.781663079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:20.800853 systemd[1]: Started cri-containerd-af98565ff542c4b7bdb2f6c415031b0f94f2f81719930676dbca332b29bdf720.scope - libcontainer container af98565ff542c4b7bdb2f6c415031b0f94f2f81719930676dbca332b29bdf720. Jun 25 16:26:20.827774 systemd[1]: Started cri-containerd-a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d.scope - libcontainer container a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d. Jun 25 16:26:20.830831 systemd[1]: Started cri-containerd-78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921.scope - libcontainer container 78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921. Jun 25 16:26:20.832000 audit: BPF prog-id=86 op=LOAD Jun 25 16:26:20.835000 audit: BPF prog-id=87 op=LOAD Jun 25 16:26:20.835000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393835363566663534326334623762646232663663343135303331 Jun 25 16:26:20.835000 audit: BPF prog-id=88 op=LOAD Jun 25 16:26:20.835000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393835363566663534326334623762646232663663343135303331 Jun 25 16:26:20.837000 audit: BPF prog-id=88 op=UNLOAD Jun 25 16:26:20.837000 audit: BPF prog-id=87 op=UNLOAD Jun 25 16:26:20.837000 audit: BPF prog-id=89 op=LOAD Jun 25 16:26:20.837000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393835363566663534326334623762646232663663343135303331 Jun 25 16:26:20.845000 audit: BPF prog-id=90 op=LOAD Jun 25 16:26:20.845000 audit: BPF prog-id=91 op=LOAD Jun 25 16:26:20.845000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2531 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.845000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646430386130613563316162303930393531666437376338346163 Jun 25 16:26:20.846000 audit: BPF prog-id=92 op=LOAD Jun 25 16:26:20.846000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2531 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646430386130613563316162303930393531666437376338346163 Jun 25 16:26:20.846000 audit: BPF prog-id=92 op=UNLOAD Jun 25 16:26:20.846000 audit: BPF prog-id=91 op=UNLOAD Jun 25 16:26:20.846000 audit: BPF prog-id=93 op=LOAD Jun 25 16:26:20.846000 audit[2565]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2531 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646430386130613563316162303930393531666437376338346163 Jun 25 16:26:20.850000 audit: BPF prog-id=94 op=LOAD Jun 25 16:26:20.851000 audit: BPF prog-id=95 op=LOAD Jun 25 16:26:20.851000 audit[2573]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2534 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643362393062616535333561313039313463663438303464373066 Jun 25 16:26:20.851000 audit: BPF prog-id=96 op=LOAD Jun 25 16:26:20.851000 audit[2573]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2534 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643362393062616535333561313039313463663438303464373066 Jun 25 16:26:20.852000 audit: BPF prog-id=96 op=UNLOAD Jun 25 16:26:20.852000 audit: BPF prog-id=95 op=UNLOAD Jun 25 16:26:20.852000 audit: BPF prog-id=97 op=LOAD Jun 25 16:26:20.852000 audit[2573]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2534 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:20.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643362393062616535333561313039313463663438303464373066 Jun 25 16:26:20.902208 kubelet[2473]: E0625 16:26:20.902063 2473 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-a46e2cd05c.17dc4c0d36ecf258", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-a46e2cd05c", UID:"ci-3815.2.4-a-a46e2cd05c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-a46e2cd05c"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 26, 17, 388470872, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 26, 17, 388470872, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-a46e2cd05c"}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:26:20.904944 containerd[1477]: time="2024-06-25T16:26:20.904899599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-a46e2cd05c,Uid:a617164dee56ad05f4096a408f719e57,Namespace:kube-system,Attempt:0,} returns sandbox id \"af98565ff542c4b7bdb2f6c415031b0f94f2f81719930676dbca332b29bdf720\"" Jun 25 16:26:20.910900 containerd[1477]: time="2024-06-25T16:26:20.910855089Z" level=info msg="CreateContainer within sandbox \"af98565ff542c4b7bdb2f6c415031b0f94f2f81719930676dbca332b29bdf720\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:26:20.925770 containerd[1477]: time="2024-06-25T16:26:20.925723366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-a46e2cd05c,Uid:4a8cac472ce10d5f0c073f465ad87c4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d\"" Jun 25 16:26:20.926051 containerd[1477]: time="2024-06-25T16:26:20.925850057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-a46e2cd05c,Uid:1a004769106948959b725ac3adfffcc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921\"" Jun 25 16:26:20.928691 containerd[1477]: time="2024-06-25T16:26:20.928647964Z" level=info msg="CreateContainer within sandbox \"78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:26:20.929024 containerd[1477]: time="2024-06-25T16:26:20.928965443Z" level=info msg="CreateContainer 
within sandbox \"a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:26:21.080868 containerd[1477]: time="2024-06-25T16:26:21.079110944Z" level=info msg="CreateContainer within sandbox \"af98565ff542c4b7bdb2f6c415031b0f94f2f81719930676dbca332b29bdf720\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2352d1b84a3c0ffdb43e9aede485a263b32bd83bf6c7b21027e4ac58db106721\"" Jun 25 16:26:21.080868 containerd[1477]: time="2024-06-25T16:26:21.080319863Z" level=info msg="StartContainer for \"2352d1b84a3c0ffdb43e9aede485a263b32bd83bf6c7b21027e4ac58db106721\"" Jun 25 16:26:21.094656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950348838.mount: Deactivated successfully. Jun 25 16:26:21.116769 systemd[1]: Started cri-containerd-2352d1b84a3c0ffdb43e9aede485a263b32bd83bf6c7b21027e4ac58db106721.scope - libcontainer container 2352d1b84a3c0ffdb43e9aede485a263b32bd83bf6c7b21027e4ac58db106721. Jun 25 16:26:21.125485 containerd[1477]: time="2024-06-25T16:26:21.122282547Z" level=info msg="CreateContainer within sandbox \"78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea\"" Jun 25 16:26:21.125485 containerd[1477]: time="2024-06-25T16:26:21.123159588Z" level=info msg="StartContainer for \"d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea\"" Jun 25 16:26:21.127123 containerd[1477]: time="2024-06-25T16:26:21.127076526Z" level=info msg="CreateContainer within sandbox \"a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0\"" Jun 25 16:26:21.127937 containerd[1477]: time="2024-06-25T16:26:21.127904370Z" level=info msg="StartContainer for \"28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0\"" Jun 25 16:26:21.130000 audit: BPF prog-id=98 op=LOAD Jun 25 16:26:21.131000 audit: BPF prog-id=99 op=LOAD Jun 25 16:26:21.131000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2527 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353264316238346133633066666462343365396165646534383561 Jun 25 16:26:21.131000 audit: BPF prog-id=100 op=LOAD Jun 25 16:26:21.131000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2527 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353264316238346133633066666462343365396165646534383561 Jun 25 16:26:21.131000 audit: BPF 
prog-id=100 op=UNLOAD Jun 25 16:26:21.131000 audit: BPF prog-id=99 op=UNLOAD Jun 25 16:26:21.132000 audit: BPF prog-id=101 op=LOAD Jun 25 16:26:21.132000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2527 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353264316238346133633066666462343365396165646534383561 Jun 25 16:26:21.178816 systemd[1]: Started cri-containerd-28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0.scope - libcontainer container 28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0. Jun 25 16:26:21.189091 containerd[1477]: time="2024-06-25T16:26:21.189037767Z" level=info msg="StartContainer for \"2352d1b84a3c0ffdb43e9aede485a263b32bd83bf6c7b21027e4ac58db106721\" returns successfully" Jun 25 16:26:21.203790 systemd[1]: Started cri-containerd-d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea.scope - libcontainer container d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea. Jun 25 16:26:21.210000 audit: BPF prog-id=102 op=LOAD Jun 25 16:26:21.211000 audit: BPF prog-id=103 op=LOAD Jun 25 16:26:21.211000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2531 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238653266626133383866356562663864326434373966353664323838 Jun 25 16:26:21.211000 audit: BPF prog-id=104 op=LOAD Jun 25 16:26:21.211000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2531 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238653266626133383866356562663864326434373966353664323838 Jun 25 16:26:21.211000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:26:21.211000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:26:21.211000 audit: BPF prog-id=105 op=LOAD Jun 25 16:26:21.211000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2531 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.211000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238653266626133383866356562663864326434373966353664323838 Jun 25 16:26:21.224000 audit: BPF prog-id=106 op=LOAD Jun 25 16:26:21.224000 audit: BPF prog-id=107 op=LOAD Jun 25 16:26:21.224000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2534 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438336364373630646432336138646461633737633266306431356666 Jun 25 16:26:21.224000 audit: BPF prog-id=108 op=LOAD Jun 25 16:26:21.224000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2534 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438336364373630646432336138646461633737633266306431356666 Jun 25 16:26:21.224000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:26:21.224000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:26:21.224000 audit: BPF prog-id=109 op=LOAD Jun 25 16:26:21.224000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2534 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:21.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438336364373630646432336138646461633737633266306431356666 Jun 25 16:26:21.286239 containerd[1477]: time="2024-06-25T16:26:21.286187148Z" level=info msg="StartContainer for \"28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0\" returns successfully" Jun 25 16:26:21.286508 containerd[1477]: time="2024-06-25T16:26:21.286187148Z" level=info msg="StartContainer for \"d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea\" returns successfully" Jun 25 16:26:23.342694 kernel: kauditd_printk_skb: 98 callbacks suppressed Jun 25 16:26:23.342900 kernel: audit: type=1400 audit(1719332783.337:341): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.337000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 
16:26:23.337000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c00114eb40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:23.371675 kernel: audit: type=1300 audit(1719332783.337:341): arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c00114eb40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:23.337000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:23.387704 kernel: audit: type=1327 audit(1719332783.337:341): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:23.338000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.402686 kernel: audit: type=1400 audit(1719332783.338:342): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.338000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c0005ff920 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:23.338000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:23.433885 kernel: audit: type=1300 audit(1719332783.338:342): arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c0005ff920 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:23.434059 kernel: audit: type=1327 audit(1719332783.338:342): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:23.600000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" 
path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.613666 kernel: audit: type=1400 audit(1719332783.600:343): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.600000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00621f2c0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.633619 kernel: audit: type=1300 audit(1719332783.600:343): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00621f2c0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.600000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.648665 kernel: audit: type=1327 audit(1719332783.600:343): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0078e3bc0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.604000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.604000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00621f3e0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.604000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.618000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.618000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c004364ab0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.618000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.653000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.653000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=50 a1=c006aa36e0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.653000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.653000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.653000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=50 a1=c004364fc0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:26:23.660681 kernel: audit: type=1400 audit(1719332783.601:344): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:23.653000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:23.721951 kubelet[2473]: I0625 16:26:23.721922 2473 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:23.725341 kubelet[2473]: E0625 16:26:23.725311 2473 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.4-a-a46e2cd05c\" not found" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:23.736038 kubelet[2473]: I0625 16:26:23.736011 2473 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:24.388195 kubelet[2473]: I0625 16:26:24.388152 2473 apiserver.go:52] "Watching apiserver" Jun 25 16:26:24.406824 kubelet[2473]: I0625 16:26:24.406790 2473 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:26:26.572096 systemd[1]: Reloading. Jun 25 16:26:26.772840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:26:26.852000 audit: BPF prog-id=110 op=LOAD Jun 25 16:26:26.852000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:26:26.853000 audit: BPF prog-id=111 op=LOAD Jun 25 16:26:26.853000 audit: BPF prog-id=94 op=UNLOAD Jun 25 16:26:26.856000 audit: BPF prog-id=112 op=LOAD Jun 25 16:26:26.856000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:26:26.857000 audit: BPF prog-id=113 op=LOAD Jun 25 16:26:26.857000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:26:26.857000 audit: BPF prog-id=114 op=LOAD Jun 25 16:26:26.857000 audit: BPF prog-id=115 op=LOAD Jun 25 16:26:26.857000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:26:26.857000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:26:26.858000 audit: BPF prog-id=116 op=LOAD Jun 25 16:26:26.858000 audit: BPF prog-id=86 op=UNLOAD Jun 25 16:26:26.858000 audit: BPF prog-id=117 op=LOAD Jun 25 16:26:26.858000 audit: BPF prog-id=77 op=UNLOAD Jun 25 16:26:26.859000 audit: BPF prog-id=118 op=LOAD Jun 25 16:26:26.859000 audit: BPF prog-id=102 op=UNLOAD Jun 25 16:26:26.860000 audit: BPF prog-id=119 op=LOAD Jun 25 16:26:26.860000 audit: BPF prog-id=120 op=LOAD Jun 25 16:26:26.860000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:26:26.860000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:26:26.862000 audit: BPF prog-id=121 op=LOAD Jun 25 16:26:26.862000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:26:26.862000 audit: BPF prog-id=122 op=LOAD Jun 25 16:26:26.862000 audit: BPF prog-id=123 op=LOAD Jun 25 16:26:26.862000 audit: BPF prog-id=81 op=UNLOAD Jun 25 16:26:26.862000 audit: BPF prog-id=82 op=UNLOAD Jun 25 16:26:26.864000 audit: BPF prog-id=124 op=LOAD Jun 25 16:26:26.864000 audit: BPF prog-id=83 op=UNLOAD Jun 25 16:26:26.864000 audit: BPF prog-id=125 op=LOAD Jun 25 16:26:26.864000 audit: BPF prog-id=126 op=LOAD Jun 25 16:26:26.864000 audit: BPF prog-id=84 op=UNLOAD Jun 25 16:26:26.864000 audit: BPF prog-id=85 op=UNLOAD Jun 25 16:26:26.865000 audit: BPF prog-id=127 op=LOAD Jun 25 16:26:26.865000 audit: BPF prog-id=98 op=UNLOAD Jun 25 16:26:26.866000 audit: BPF prog-id=128 op=LOAD Jun 25 16:26:26.866000 audit: BPF prog-id=106 op=UNLOAD Jun 25 16:26:26.867000 audit: BPF prog-id=129 op=LOAD Jun 25 16:26:26.867000 audit: BPF prog-id=90 op=UNLOAD Jun 25 16:26:26.883885 kubelet[2473]: I0625 16:26:26.883644 2473 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:26:26.883810 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:26.904938 systemd[1]: kubelet.service: Deactivated successfully. 
Jun 25 16:26:26.905215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:26.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:26.909999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:27.022067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:27.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:27.038000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:27.038000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0008f75e0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:27.038000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:27.040000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:27.040000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0008f77a0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:27.040000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:27.042000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:27.042000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0008f7960 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:27.042000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c7af80 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:27.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:27.520160 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:26:27.520677 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:26:27.520772 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:26:27.521789 kubelet[2831]: I0625 16:26:27.521743 2831 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:26:27.530426 kubelet[2831]: I0625 16:26:27.527959 2831 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:26:27.530426 kubelet[2831]: I0625 16:26:27.527989 2831 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:26:27.530426 kubelet[2831]: I0625 16:26:27.528323 2831 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:26:27.531055 kubelet[2831]: I0625 16:26:27.530837 2831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:26:27.532055 kubelet[2831]: I0625 16:26:27.532036 2831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.545745 2831 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.546188 2831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.546854 2831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.546887 2831 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.546901 2831 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:26:27.551659 kubelet[2831]: I0625 16:26:27.547046 2831 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:27.552100 kubelet[2831]: I0625 16:26:27.547353 2831 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:26:27.552100 kubelet[2831]: I0625 16:26:27.547371 2831 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:26:27.552100 kubelet[2831]: I0625 16:26:27.548772 2831 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:26:27.559659 kubelet[2831]: I0625 16:26:27.548799 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:26:27.563404 kubelet[2831]: I0625 16:26:27.563387 2831 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:26:27.565766 kubelet[2831]: I0625 16:26:27.565747 2831 server.go:1232] "Started kubelet" Jun 25 16:26:27.567944 kubelet[2831]: I0625 16:26:27.567351 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:26:27.572581 kubelet[2831]: I0625 16:26:27.572504 2831 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:26:27.573666 kubelet[2831]: I0625 16:26:27.573649 2831 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:26:27.575477 kubelet[2831]: I0625 16:26:27.575453 2831 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:26:27.575722 kubelet[2831]: I0625 16:26:27.575702 2831 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:26:27.577094 kubelet[2831]: E0625 16:26:27.577074 2831 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:26:27.577211 kubelet[2831]: E0625 16:26:27.577199 2831 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:26:27.577623 kubelet[2831]: I0625 16:26:27.577610 2831 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:26:27.577927 kubelet[2831]: I0625 16:26:27.577910 2831 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:26:27.578144 kubelet[2831]: I0625 16:26:27.578130 2831 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:26:27.589737 kubelet[2831]: I0625 16:26:27.589695 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:26:27.594246 kubelet[2831]: I0625 16:26:27.594225 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:26:27.594389 kubelet[2831]: I0625 16:26:27.594377 2831 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:26:27.595949 kubelet[2831]: I0625 16:26:27.595916 2831 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:26:27.596122 kubelet[2831]: E0625 16:26:27.596105 2831 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:26:27.646109 kubelet[2831]: I0625 16:26:27.646077 2831 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:26:27.646109 kubelet[2831]: I0625 16:26:27.646101 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:26:27.646109 kubelet[2831]: I0625 16:26:27.646120 2831 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:27.646362 kubelet[2831]: I0625 16:26:27.646285 2831 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:26:27.646362 kubelet[2831]: I0625 16:26:27.646312 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:26:27.646362 kubelet[2831]: I0625 16:26:27.646321 2831 policy_none.go:49] "None policy: Start" Jun 25 16:26:27.647081 kubelet[2831]: I0625 16:26:27.646925 2831 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:26:27.647081 kubelet[2831]: I0625 16:26:27.646953 2831 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:26:27.647227 kubelet[2831]: I0625 16:26:27.647127 2831 state_mem.go:75] "Updated machine memory state" Jun 25 16:26:27.650984 kubelet[2831]: I0625 16:26:27.650967 2831 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:26:27.652786 kubelet[2831]: I0625 16:26:27.652717 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:26:27.680688 kubelet[2831]: I0625 16:26:27.680657 2831 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.695409 kubelet[2831]: I0625 16:26:27.695373 2831 kubelet_node_status.go:108] "Node was previously registered" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.695601 kubelet[2831]: I0625 16:26:27.695467 2831 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-a46e2cd05c" Jun 25 
16:26:27.696408 kubelet[2831]: I0625 16:26:27.696384 2831 topology_manager.go:215] "Topology Admit Handler" podUID="a617164dee56ad05f4096a408f719e57" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.696511 kubelet[2831]: I0625 16:26:27.696495 2831 topology_manager.go:215] "Topology Admit Handler" podUID="4a8cac472ce10d5f0c073f465ad87c4b" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.696559 kubelet[2831]: I0625 16:26:27.696542 2831 topology_manager.go:215] "Topology Admit Handler" podUID="1a004769106948959b725ac3adfffcc4" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.707374 kubelet[2831]: W0625 16:26:27.707354 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:26:27.712781 kubelet[2831]: W0625 16:26:27.712761 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:26:27.713956 kubelet[2831]: W0625 16:26:27.713797 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:26:27.879575 kubelet[2831]: I0625 16:26:27.879388 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.879806 kubelet[2831]: I0625 16:26:27.879612 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.879806 kubelet[2831]: I0625 16:26:27.879649 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a004769106948959b725ac3adfffcc4-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-a46e2cd05c\" (UID: \"1a004769106948959b725ac3adfffcc4\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.879806 kubelet[2831]: I0625 16:26:27.879794 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.880048 kubelet[2831]: I0625 16:26:27.879856 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.880048 kubelet[2831]: I0625 
16:26:27.879960 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a617164dee56ad05f4096a408f719e57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-a46e2cd05c\" (UID: \"a617164dee56ad05f4096a408f719e57\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.880048 kubelet[2831]: I0625 16:26:27.880000 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.880340 kubelet[2831]: I0625 16:26:27.880056 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:27.882611 kubelet[2831]: I0625 16:26:27.881244 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a8cac472ce10d5f0c073f465ad87c4b-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-a46e2cd05c\" (UID: \"4a8cac472ce10d5f0c073f465ad87c4b\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" Jun 25 16:26:28.562183 kubelet[2831]: I0625 16:26:28.562139 2831 apiserver.go:52] "Watching apiserver" Jun 25 16:26:28.578959 kubelet[2831]: I0625 16:26:28.578921 2831 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:26:28.644941 kubelet[2831]: I0625 16:26:28.644301 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-a46e2cd05c" podStartSLOduration=1.644240844 podCreationTimestamp="2024-06-25 16:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:28.643523885 +0000 UTC m=+1.616192362" watchObservedRunningTime="2024-06-25 16:26:28.644240844 +0000 UTC m=+1.616909421" Jun 25 16:26:28.667417 kubelet[2831]: I0625 16:26:28.667384 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-a46e2cd05c" podStartSLOduration=1.667337238 podCreationTimestamp="2024-06-25 16:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:28.659243396 +0000 UTC m=+1.631911973" watchObservedRunningTime="2024-06-25 16:26:28.667337238 +0000 UTC m=+1.640005815" Jun 25 16:26:28.667784 kubelet[2831]: I0625 16:26:28.667765 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-a46e2cd05c" podStartSLOduration=1.6677273160000001 podCreationTimestamp="2024-06-25 16:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:28.666786769 +0000 UTC m=+1.639455246" watchObservedRunningTime="2024-06-25 
16:26:28.667727316 +0000 UTC m=+1.640395893" Jun 25 16:26:29.615886 kernel: kauditd_printk_skb: 68 callbacks suppressed Jun 25 16:26:29.616039 kernel: audit: type=1400 audit(1719332789.606:395): avc: denied { watch } for pid=2707 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=520996 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:26:29.606000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=520996 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:26:29.606000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000afaf40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:29.651757 kernel: audit: type=1300 audit(1719332789.606:395): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000afaf40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:26:29.665759 kernel: audit: type=1327 audit(1719332789.606:395): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:29.606000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:34.740009 sudo[2013]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:34.738000 audit[2013]: USER_END pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.738000 audit[2013]: CRED_DISP pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.750606 kernel: audit: type=1106 audit(1719332794.738:396): pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.750661 kernel: audit: type=1104 audit(1719332794.738:397): pid=2013 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:34.844092 sshd[2010]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:34.844000 audit[2010]: USER_END pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:26:34.847807 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:42224.service: Deactivated successfully. Jun 25 16:26:34.848549 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:26:34.848721 systemd[1]: session-9.scope: Consumed 4.341s CPU time. Jun 25 16:26:34.849931 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:26:34.850960 systemd-logind[1471]: Removed session 9. Jun 25 16:26:34.844000 audit[2010]: CRED_DISP pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:26:34.869053 kernel: audit: type=1106 audit(1719332794.844:398): pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:26:34.869161 kernel: audit: type=1104 audit(1719332794.844:399): pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:26:34.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.4:22-10.200.16.10:42224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.881857 kernel: audit: type=1131 audit(1719332794.844:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.4:22-10.200.16.10:42224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:40.598962 kubelet[2831]: I0625 16:26:40.598922 2831 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:26:40.599461 containerd[1477]: time="2024-06-25T16:26:40.599314036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:26:40.599792 kubelet[2831]: I0625 16:26:40.599637 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:26:40.730519 kubelet[2831]: I0625 16:26:40.730467 2831 topology_manager.go:215] "Topology Admit Handler" podUID="de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4" podNamespace="kube-system" podName="kube-proxy-d5ktb" Jun 25 16:26:40.736671 systemd[1]: Created slice kubepods-besteffort-podde15ab4b_2a5a_48f0_b8ad_a26618e5cdd4.slice - libcontainer container kubepods-besteffort-podde15ab4b_2a5a_48f0_b8ad_a26618e5cdd4.slice. 
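[note] The audit records in this log carry the originating command line as a hex-encoded PROCTITLE field: the argv joined with NUL bytes, truncated by the kernel's audit proctitle limit, which is why the kube-controller-manager entries above end mid-flag. A minimal offline decoding sketch (the helper name is ours, not part of any tool shown in this log):

    def decode_proctitle(hex_argv: str) -> str:
        """Decode an audit PROCTITLE field: hex bytes, NUL between argv entries."""
        raw = bytes.fromhex(hex_argv)
        return " ".join(a.decode("utf-8", errors="replace") for a in raw.split(b"\x00") if a)

    # The kube-controller-manager record above decodes to:
    #   kube-controller-manager --allocate-node-cidrs=true
    #   --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authori  (truncated)
    print(decode_proctitle(
        "6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269"
    ))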
Jun 25 16:26:40.771014 kubelet[2831]: I0625 16:26:40.770961 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskmj\" (UniqueName: \"kubernetes.io/projected/de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4-kube-api-access-bskmj\") pod \"kube-proxy-d5ktb\" (UID: \"de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4\") " pod="kube-system/kube-proxy-d5ktb" Jun 25 16:26:40.771217 kubelet[2831]: I0625 16:26:40.771031 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4-xtables-lock\") pod \"kube-proxy-d5ktb\" (UID: \"de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4\") " pod="kube-system/kube-proxy-d5ktb" Jun 25 16:26:40.771217 kubelet[2831]: I0625 16:26:40.771072 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4-lib-modules\") pod \"kube-proxy-d5ktb\" (UID: \"de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4\") " pod="kube-system/kube-proxy-d5ktb" Jun 25 16:26:40.771217 kubelet[2831]: I0625 16:26:40.771098 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4-kube-proxy\") pod \"kube-proxy-d5ktb\" (UID: \"de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4\") " pod="kube-system/kube-proxy-d5ktb" Jun 25 16:26:40.902148 kubelet[2831]: I0625 16:26:40.902101 2831 topology_manager.go:215] "Topology Admit Handler" podUID="11fcd378-70a1-49b9-ae12-a00650cba1f5" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-lkzzs" Jun 25 16:26:40.908791 systemd[1]: Created slice kubepods-besteffort-pod11fcd378_70a1_49b9_ae12_a00650cba1f5.slice - libcontainer container kubepods-besteffort-pod11fcd378_70a1_49b9_ae12_a00650cba1f5.slice. 
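[note] Both "Created slice" lines above pair a pod UID with a kubepods-besteffort-pod<uid>.slice cgroup name; the visible mapping is the UID with dashes replaced by underscores under a fixed prefix. A small sketch of that naming, inferred only from the two pairs shown here and covering just the BestEffort case (the kubelet's systemd cgroup driver is what actually derives it):

    def besteffort_pod_slice(pod_uid: str) -> str:
        # Inferred from the log: fixed prefix + pod UID with '-' -> '_' + ".slice"
        return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

    # The kube-proxy-d5ktb and tigera-operator pod UIDs from the log reproduce the slice names:
    assert besteffort_pod_slice("de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4") == "kubepods-besteffort-podde15ab4b_2a5a_48f0_b8ad_a26618e5cdd4.slice"
    assert besteffort_pod_slice("11fcd378-70a1-49b9-ae12-a00650cba1f5") == "kubepods-besteffort-pod11fcd378_70a1_49b9_ae12_a00650cba1f5.slice"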
Jun 25 16:26:40.913569 kubelet[2831]: W0625 16:26:40.913542 2831 reflector.go:535] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:26:40.913807 kubelet[2831]: E0625 16:26:40.913795 2831 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:26:40.914396 kubelet[2831]: W0625 16:26:40.914369 2831 reflector.go:535] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:26:40.914551 kubelet[2831]: E0625 16:26:40.914534 2831 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:26:40.972180 kubelet[2831]: I0625 16:26:40.972134 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwjd\" (UniqueName: \"kubernetes.io/projected/11fcd378-70a1-49b9-ae12-a00650cba1f5-kube-api-access-8wwjd\") pod \"tigera-operator-76c4974c85-lkzzs\" (UID: \"11fcd378-70a1-49b9-ae12-a00650cba1f5\") " pod="tigera-operator/tigera-operator-76c4974c85-lkzzs" Jun 25 16:26:40.972180 kubelet[2831]: I0625 16:26:40.972185 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11fcd378-70a1-49b9-ae12-a00650cba1f5-var-lib-calico\") pod \"tigera-operator-76c4974c85-lkzzs\" (UID: \"11fcd378-70a1-49b9-ae12-a00650cba1f5\") " pod="tigera-operator/tigera-operator-76c4974c85-lkzzs" Jun 25 16:26:41.046351 containerd[1477]: time="2024-06-25T16:26:41.046296499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5ktb,Uid:de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:41.096138 containerd[1477]: time="2024-06-25T16:26:41.096049898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:41.096348 containerd[1477]: time="2024-06-25T16:26:41.096114595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:41.096348 containerd[1477]: time="2024-06-25T16:26:41.096139694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:41.096348 containerd[1477]: time="2024-06-25T16:26:41.096156693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:41.133773 systemd[1]: Started cri-containerd-df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446.scope - libcontainer container df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446. Jun 25 16:26:41.142000 audit: BPF prog-id=130 op=LOAD Jun 25 16:26:41.142000 audit: BPF prog-id=131 op=LOAD Jun 25 16:26:41.149028 kernel: audit: type=1334 audit(1719332801.142:401): prog-id=130 op=LOAD Jun 25 16:26:41.149095 kernel: audit: type=1334 audit(1719332801.142:402): prog-id=131 op=LOAD Jun 25 16:26:41.142000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2915 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.159678 kernel: audit: type=1300 audit(1719332801.142:402): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2915 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313331366238623932653730373531663131353564613838306339 Jun 25 16:26:41.171895 kernel: audit: type=1327 audit(1719332801.142:402): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313331366238623932653730373531663131353564613838306339 Jun 25 16:26:41.143000 audit: BPF prog-id=132 op=LOAD Jun 25 16:26:41.176776 kernel: audit: type=1334 audit(1719332801.143:403): prog-id=132 op=LOAD Jun 25 16:26:41.143000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2915 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.189093 kernel: audit: type=1300 audit(1719332801.143:403): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2915 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.189233 containerd[1477]: time="2024-06-25T16:26:41.187833822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5ktb,Uid:de15ab4b-2a5a-48f0-b8ad-a26618e5cdd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446\"" Jun 25 16:26:41.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313331366238623932653730373531663131353564613838306339 Jun 25 16:26:41.196806 containerd[1477]: time="2024-06-25T16:26:41.193708074Z" level=info msg="CreateContainer within sandbox \"df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:26:41.198770 kernel: audit: type=1327 audit(1719332801.143:403): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313331366238623932653730373531663131353564613838306339 Jun 25 16:26:41.201242 kernel: audit: type=1334 audit(1719332801.143:404): prog-id=132 op=UNLOAD Jun 25 16:26:41.143000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:26:41.143000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:26:41.206963 kernel: audit: type=1334 audit(1719332801.143:405): prog-id=131 op=UNLOAD Jun 25 16:26:41.143000 audit: BPF prog-id=133 op=LOAD Jun 25 16:26:41.210696 kernel: audit: type=1334 audit(1719332801.143:406): prog-id=133 op=LOAD Jun 25 16:26:41.143000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2915 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313331366238623932653730373531663131353564613838306339 Jun 25 16:26:41.239163 containerd[1477]: time="2024-06-25T16:26:41.239109357Z" level=info msg="CreateContainer within sandbox \"df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cfedd3b673e68d4ad9b7f4f82b7c428a30f6056901af9a975fff68a6b04ce9ee\"" Jun 25 16:26:41.241338 containerd[1477]: time="2024-06-25T16:26:41.239619836Z" level=info msg="StartContainer for \"cfedd3b673e68d4ad9b7f4f82b7c428a30f6056901af9a975fff68a6b04ce9ee\"" Jun 25 16:26:41.264767 systemd[1]: Started cri-containerd-cfedd3b673e68d4ad9b7f4f82b7c428a30f6056901af9a975fff68a6b04ce9ee.scope - libcontainer container cfedd3b673e68d4ad9b7f4f82b7c428a30f6056901af9a975fff68a6b04ce9ee. 
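[note] Running the same PROCTITLE decoding over the NETFILTER_CFG records that follow shows what is evidently kube-proxy's iptables bootstrap after its container starts: the first six records decode to iptables/ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t {mangle,nat,filter}, and the later ones create and wire up KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD and KUBE-PROXY-FIREWALL in the filter table plus KUBE-SERVICES and KUBE-POSTROUTING in nat.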
Jun 25 16:26:41.276000 audit: BPF prog-id=134 op=LOAD Jun 25 16:26:41.276000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2915 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656464336236373365363864346164396237663466383262376334 Jun 25 16:26:41.276000 audit: BPF prog-id=135 op=LOAD Jun 25 16:26:41.276000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2915 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656464336236373365363864346164396237663466383262376334 Jun 25 16:26:41.276000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:26:41.276000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:26:41.276000 audit: BPF prog-id=136 op=LOAD Jun 25 16:26:41.276000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2915 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656464336236373365363864346164396237663466383262376334 Jun 25 16:26:41.298984 containerd[1477]: time="2024-06-25T16:26:41.298066368Z" level=info msg="StartContainer for \"cfedd3b673e68d4ad9b7f4f82b7c428a30f6056901af9a975fff68a6b04ce9ee\" returns successfully" Jun 25 16:26:41.349000 audit[3009]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.349000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2fabcfa0 a2=0 a3=7ffc2fabcf8c items=0 ppid=2968 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:26:41.351000 audit[3010]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.351000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdc3c67c0 a2=0 a3=7fffdc3c67ac items=0 ppid=2968 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:26:41.355000 audit[3011]: NETFILTER_CFG table=mangle:44 family=2 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.355000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff85f685b0 a2=0 a3=7fff85f6859c items=0 ppid=2968 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:26:41.356000 audit[3012]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.356000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9a4ae0c0 a2=0 a3=7ffe9a4ae0ac items=0 ppid=2968 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:26:41.356000 audit[3013]: NETFILTER_CFG table=nat:46 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.356000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4e2acb80 a2=0 a3=7ffe4e2acb6c items=0 ppid=2968 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:26:41.358000 audit[3014]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.358000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0bb8b1b0 a2=0 a3=7fff0bb8b19c items=0 ppid=2968 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:26:41.452000 audit[3015]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.452000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc964d9700 a2=0 a3=7ffc964d96ec items=0 ppid=2968 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.452000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:26:41.456000 audit[3017]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.456000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff56492390 a2=0 a3=7fff5649237c items=0 ppid=2968 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:26:41.460000 audit[3020]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.460000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc4af73b20 a2=0 a3=7ffc4af73b0c items=0 ppid=2968 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:26:41.462000 audit[3021]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.462000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13faa9a0 a2=0 a3=7fff13faa98c items=0 ppid=2968 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:26:41.464000 audit[3023]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.464000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcaaf24360 a2=0 a3=7ffcaaf2434c items=0 ppid=2968 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:26:41.466000 audit[3024]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.466000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d5adca0 a2=0 
a3=7ffc6d5adc8c items=0 ppid=2968 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:26:41.468000 audit[3026]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.468000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc122eb1a0 a2=0 a3=7ffc122eb18c items=0 ppid=2968 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:26:41.473000 audit[3029]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.473000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4724b490 a2=0 a3=7ffd4724b47c items=0 ppid=2968 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:26:41.474000 audit[3030]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.474000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffe744de0 a2=0 a3=7ffffe744dcc items=0 ppid=2968 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:26:41.477000 audit[3032]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.477000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb8bb6f30 a2=0 a3=7ffeb8bb6f1c items=0 ppid=2968 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:26:41.478000 audit[3033]: NETFILTER_CFG 
table=filter:58 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.478000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1b8e5d50 a2=0 a3=7fff1b8e5d3c items=0 ppid=2968 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:26:41.481000 audit[3035]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.481000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2aff2b90 a2=0 a3=7ffd2aff2b7c items=0 ppid=2968 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:26:41.485000 audit[3038]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.485000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5b36f450 a2=0 a3=7fff5b36f43c items=0 ppid=2968 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:26:41.489000 audit[3041]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.489000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf2383ba0 a2=0 a3=7ffcf2383b8c items=0 ppid=2968 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:26:41.490000 audit[3042]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.490000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedfa5c2a0 a2=0 a3=7ffedfa5c28c items=0 ppid=2968 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:26:41.493000 audit[3044]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.493000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcb66e9c20 a2=0 a3=7ffcb66e9c0c items=0 ppid=2968 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:26:41.496000 audit[3047]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.496000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc6dd8edb0 a2=0 a3=7ffc6dd8ed9c items=0 ppid=2968 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:26:41.498000 audit[3048]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.498000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2f41d980 a2=0 a3=7ffe2f41d96c items=0 ppid=2968 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:26:41.500000 audit[3050]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:41.500000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff7965e980 a2=0 a3=7fff7965e96c items=0 ppid=2968 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:26:41.536000 audit[3056]: NETFILTER_CFG table=filter:67 family=2 entries=8 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.536000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd1c1b1520 a2=0 
a3=7ffd1c1b150c items=0 ppid=2968 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.654000 audit[3056]: NETFILTER_CFG table=nat:68 family=2 entries=14 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.654000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd1c1b1520 a2=0 a3=7ffd1c1b150c items=0 ppid=2968 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.659000 audit[3062]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.659000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc16f8bbd0 a2=0 a3=7ffc16f8bbbc items=0 ppid=2968 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:26:41.662000 audit[3064]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.662000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd3facdb10 a2=0 a3=7ffd3facdafc items=0 ppid=2968 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:26:41.667000 audit[3067]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.667000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb5502320 a2=0 a3=7ffcb550230c items=0 ppid=2968 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:26:41.669000 audit[3068]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain 
pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.669000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff1209980 a2=0 a3=7ffff120996c items=0 ppid=2968 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:26:41.671000 audit[3070]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.671000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc08462790 a2=0 a3=7ffc0846277c items=0 ppid=2968 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:26:41.673000 audit[3071]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.673000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc052ad400 a2=0 a3=7ffc052ad3ec items=0 ppid=2968 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:26:41.676000 audit[3073]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.676000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca60f3ef0 a2=0 a3=7ffca60f3edc items=0 ppid=2968 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:26:41.680000 audit[3076]: NETFILTER_CFG table=filter:76 family=10 entries=2 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.680000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffecbde6c70 a2=0 a3=7ffecbde6c5c items=0 ppid=2968 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.680000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:26:41.681000 audit[3077]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.681000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc07b87780 a2=0 a3=7ffc07b8776c items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:26:41.684000 audit[3079]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.684000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0beb81b0 a2=0 a3=7ffd0beb819c items=0 ppid=2968 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:26:41.685000 audit[3080]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.685000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff51c89940 a2=0 a3=7fff51c8992c items=0 ppid=2968 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:26:41.688000 audit[3082]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.688000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeed853750 a2=0 a3=7ffeed85373c items=0 ppid=2968 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:26:41.692000 audit[3085]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.692000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5eeb62e0 a2=0 a3=7ffc5eeb62cc 
items=0 ppid=2968 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:26:41.696000 audit[3088]: NETFILTER_CFG table=filter:82 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.696000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3ebcd4c0 a2=0 a3=7ffd3ebcd4ac items=0 ppid=2968 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:26:41.697000 audit[3089]: NETFILTER_CFG table=nat:83 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.697000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2b9658a0 a2=0 a3=7ffe2b96588c items=0 ppid=2968 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:26:41.700000 audit[3091]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.700000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc533c7210 a2=0 a3=7ffc533c71fc items=0 ppid=2968 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:26:41.704000 audit[3094]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.704000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd52222770 a2=0 a3=7ffd5222275c items=0 ppid=2968 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.704000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:26:41.705000 audit[3095]: NETFILTER_CFG table=nat:86 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.705000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5bce12a0 a2=0 a3=7ffe5bce128c items=0 ppid=2968 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:26:41.709000 audit[3097]: NETFILTER_CFG table=nat:87 family=10 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.709000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdaadb0840 a2=0 a3=7ffdaadb082c items=0 ppid=2968 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:26:41.710000 audit[3098]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.710000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c200940 a2=0 a3=7ffc0c20092c items=0 ppid=2968 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:26:41.713000 audit[3100]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.713000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc3ffc6060 a2=0 a3=7ffc3ffc604c items=0 ppid=2968 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:41.717000 audit[3103]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:41.717000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff595707a0 a2=0 a3=7fff5957078c items=0 ppid=2968 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:41.720000 audit[3105]: NETFILTER_CFG table=filter:91 family=10 entries=3 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:26:41.720000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffce4bd4a00 a2=0 a3=7ffce4bd49ec items=0 ppid=2968 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.720000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.720000 audit[3105]: NETFILTER_CFG table=nat:92 family=10 entries=7 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:26:41.720000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffce4bd4a00 a2=0 a3=7ffce4bd49ec items=0 ppid=2968 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.720000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.896292 systemd[1]: run-containerd-runc-k8s.io-df1316b8b92e70751f1155da880c9e80318f7f50b3145e9105cabb0a8f4b8446-runc.XhO0qD.mount: Deactivated successfully. Jun 25 16:26:42.078520 kubelet[2831]: E0625 16:26:42.078390 2831 projected.go:292] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:26:42.078520 kubelet[2831]: E0625 16:26:42.078443 2831 projected.go:198] Error preparing data for projected volume kube-api-access-8wwjd for pod tigera-operator/tigera-operator-76c4974c85-lkzzs: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:26:42.079033 kubelet[2831]: E0625 16:26:42.078543 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/11fcd378-70a1-49b9-ae12-a00650cba1f5-kube-api-access-8wwjd podName:11fcd378-70a1-49b9-ae12-a00650cba1f5 nodeName:}" failed. No retries permitted until 2024-06-25 16:26:42.578505781 +0000 UTC m=+15.551174358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wwjd" (UniqueName: "kubernetes.io/projected/11fcd378-70a1-49b9-ae12-a00650cba1f5-kube-api-access-8wwjd") pod "tigera-operator-76c4974c85-lkzzs" (UID: "11fcd378-70a1-49b9-ae12-a00650cba1f5") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:26:42.718958 containerd[1477]: time="2024-06-25T16:26:42.718903002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-lkzzs,Uid:11fcd378-70a1-49b9-ae12-a00650cba1f5,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:26:42.763103 containerd[1477]: time="2024-06-25T16:26:42.763015877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:42.763103 containerd[1477]: time="2024-06-25T16:26:42.763064975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:42.763103 containerd[1477]: time="2024-06-25T16:26:42.763082275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:42.763371 containerd[1477]: time="2024-06-25T16:26:42.763325765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:42.789777 systemd[1]: Started cri-containerd-2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8.scope - libcontainer container 2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8. Jun 25 16:26:42.798000 audit: BPF prog-id=137 op=LOAD Jun 25 16:26:42.799000 audit: BPF prog-id=138 op=LOAD Jun 25 16:26:42.799000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3114 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:42.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643332386361353162316430376331393131663761356262656165 Jun 25 16:26:42.799000 audit: BPF prog-id=139 op=LOAD Jun 25 16:26:42.799000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3114 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:42.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643332386361353162316430376331393131663761356262656165 Jun 25 16:26:42.799000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:26:42.799000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:26:42.799000 audit: BPF prog-id=140 op=LOAD Jun 25 16:26:42.799000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3114 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:42.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643332386361353162316430376331393131663761356262656165 Jun 25 16:26:42.828115 containerd[1477]: time="2024-06-25T16:26:42.828074987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-lkzzs,Uid:11fcd378-70a1-49b9-ae12-a00650cba1f5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8\"" Jun 25 16:26:42.831685 containerd[1477]: time="2024-06-25T16:26:42.831571243Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:26:42.894171 systemd[1]: run-containerd-runc-k8s.io-2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8-runc.rHHPbx.mount: Deactivated successfully. Jun 25 16:26:45.261303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645641339.mount: Deactivated successfully. Jun 25 16:26:45.954146 containerd[1477]: time="2024-06-25T16:26:45.954090004Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:45.956759 containerd[1477]: time="2024-06-25T16:26:45.956699803Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076104" Jun 25 16:26:45.960112 containerd[1477]: time="2024-06-25T16:26:45.960076972Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:45.964816 containerd[1477]: time="2024-06-25T16:26:45.964786688Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:45.968532 containerd[1477]: time="2024-06-25T16:26:45.968497244Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:45.969396 containerd[1477]: time="2024-06-25T16:26:45.969354311Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.137705571s" Jun 25 16:26:45.969497 containerd[1477]: time="2024-06-25T16:26:45.969403009Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:26:45.971658 containerd[1477]: time="2024-06-25T16:26:45.971361933Z" level=info msg="CreateContainer within sandbox \"2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:26:46.006816 containerd[1477]: time="2024-06-25T16:26:46.006767561Z" level=info msg="CreateContainer within sandbox \"2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d\"" Jun 25 16:26:46.007401 containerd[1477]: time="2024-06-25T16:26:46.007362538Z" level=info msg="StartContainer for \"cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d\"" Jun 25 16:26:46.035773 systemd[1]: Started cri-containerd-cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d.scope - libcontainer container cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d. 
Jun 25 16:26:46.046000 audit: BPF prog-id=141 op=LOAD Jun 25 16:26:46.047000 audit: BPF prog-id=142 op=LOAD Jun 25 16:26:46.047000 audit[3160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3114 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:46.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362643565333935306466333331336135343930313831616436633464 Jun 25 16:26:46.047000 audit: BPF prog-id=143 op=LOAD Jun 25 16:26:46.047000 audit[3160]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3114 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:46.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362643565333935306466333331336135343930313831616436633464 Jun 25 16:26:46.047000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:26:46.047000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:26:46.047000 audit: BPF prog-id=144 op=LOAD Jun 25 16:26:46.047000 audit[3160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3114 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:46.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362643565333935306466333331336135343930313831616436633464 Jun 25 16:26:46.064066 containerd[1477]: time="2024-06-25T16:26:46.063994880Z" level=info msg="StartContainer for \"cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d\" returns successfully" Jun 25 16:26:46.668432 kubelet[2831]: I0625 16:26:46.667826 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-d5ktb" podStartSLOduration=6.667781371 podCreationTimestamp="2024-06-25 16:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:41.656824419 +0000 UTC m=+14.629492996" watchObservedRunningTime="2024-06-25 16:26:46.667781371 +0000 UTC m=+19.640449848" Jun 25 16:26:49.163000 audit[3193]: NETFILTER_CFG table=filter:93 family=2 entries=15 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.166232 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:26:49.166368 kernel: audit: type=1325 audit(1719332809.163:475): table=filter:93 family=2 entries=15 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.163000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6d9494a0 a2=0 a3=7fff6d94948c items=0 ppid=2968 pid=3193 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.173748 kernel: audit: type=1300 audit(1719332809.163:475): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6d9494a0 a2=0 a3=7fff6d94948c items=0 ppid=2968 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.186613 kernel: audit: type=1327 audit(1719332809.163:475): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.163000 audit[3193]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.197163 kernel: audit: type=1325 audit(1719332809.163:476): table=nat:94 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.163000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6d9494a0 a2=0 a3=0 items=0 ppid=2968 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.208955 kernel: audit: type=1300 audit(1719332809.163:476): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6d9494a0 a2=0 a3=0 items=0 ppid=2968 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.203000 audit[3195]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.217611 kernel: audit: type=1327 audit(1719332809.163:476): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.217663 kernel: audit: type=1325 audit(1719332809.203:477): table=filter:95 family=2 entries=16 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.203000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff41c87980 a2=0 a3=7fff41c8796c items=0 ppid=2968 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.235077 kernel: audit: type=1300 audit(1719332809.203:477): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff41c87980 a2=0 a3=7fff41c8796c items=0 ppid=2968 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.203000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.241694 kernel: audit: type=1327 audit(1719332809.203:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.203000 audit[3195]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.248048 kernel: audit: type=1325 audit(1719332809.203:478): table=nat:96 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:49.203000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff41c87980 a2=0 a3=0 items=0 ppid=2968 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:49.303176 kubelet[2831]: I0625 16:26:49.303112 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-lkzzs" podStartSLOduration=6.162807756 podCreationTimestamp="2024-06-25 16:26:40 +0000 UTC" firstStartedPulling="2024-06-25 16:26:42.829426331 +0000 UTC m=+15.802094908" lastFinishedPulling="2024-06-25 16:26:45.969666999 +0000 UTC m=+18.942335476" observedRunningTime="2024-06-25 16:26:46.669298613 +0000 UTC m=+19.641967090" watchObservedRunningTime="2024-06-25 16:26:49.303048324 +0000 UTC m=+22.275716801" Jun 25 16:26:49.303708 kubelet[2831]: I0625 16:26:49.303339 2831 topology_manager.go:215] "Topology Admit Handler" podUID="b019f5ab-fa3b-404a-a541-b066fa123b8e" podNamespace="calico-system" podName="calico-typha-5ffb96dddb-sm2l2" Jun 25 16:26:49.310566 systemd[1]: Created slice kubepods-besteffort-podb019f5ab_fa3b_404a_a541_b066fa123b8e.slice - libcontainer container kubepods-besteffort-podb019f5ab_fa3b_404a_a541_b066fa123b8e.slice. 
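The pull statistics logged above for quay.io/tigera/operator:v1.34.0 (22076104 bytes read, completed "in 3.137705571s") give a rough transfer rate; the calculation below is purely illustrative and uses only those two logged numbers:

```python
# Rough image-pull throughput from the containerd values logged above.
bytes_read = 22_076_104        # "bytes read=22076104"
pull_seconds = 3.137705571     # "in 3.137705571s"

rate = bytes_read / pull_seconds
print(f"{rate / 1_000_000:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")  # ~7.0 MB/s (~6.7 MiB/s)
```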
Jun 25 16:26:49.330220 kubelet[2831]: I0625 16:26:49.330185 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b019f5ab-fa3b-404a-a541-b066fa123b8e-typha-certs\") pod \"calico-typha-5ffb96dddb-sm2l2\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " pod="calico-system/calico-typha-5ffb96dddb-sm2l2" Jun 25 16:26:49.330389 kubelet[2831]: I0625 16:26:49.330257 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdpnc\" (UniqueName: \"kubernetes.io/projected/b019f5ab-fa3b-404a-a541-b066fa123b8e-kube-api-access-qdpnc\") pod \"calico-typha-5ffb96dddb-sm2l2\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " pod="calico-system/calico-typha-5ffb96dddb-sm2l2" Jun 25 16:26:49.330389 kubelet[2831]: I0625 16:26:49.330288 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b019f5ab-fa3b-404a-a541-b066fa123b8e-tigera-ca-bundle\") pod \"calico-typha-5ffb96dddb-sm2l2\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " pod="calico-system/calico-typha-5ffb96dddb-sm2l2" Jun 25 16:26:49.384566 kubelet[2831]: I0625 16:26:49.384520 2831 topology_manager.go:215] "Topology Admit Handler" podUID="b4146ffb-43cc-4c81-84c8-6e23adccb5cb" podNamespace="calico-system" podName="calico-node-6kgdg" Jun 25 16:26:49.392266 systemd[1]: Created slice kubepods-besteffort-podb4146ffb_43cc_4c81_84c8_6e23adccb5cb.slice - libcontainer container kubepods-besteffort-podb4146ffb_43cc_4c81_84c8_6e23adccb5cb.slice. Jun 25 16:26:49.430847 kubelet[2831]: I0625 16:26:49.430713 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-tigera-ca-bundle\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.430847 kubelet[2831]: I0625 16:26:49.430774 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-policysync\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.430847 kubelet[2831]: I0625 16:26:49.430806 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-run-calico\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.430847 kubelet[2831]: I0625 16:26:49.430844 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-log-dir\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431161 kubelet[2831]: I0625 16:26:49.430875 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-lib-calico\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " 
pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431161 kubelet[2831]: I0625 16:26:49.430904 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-lib-modules\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431161 kubelet[2831]: I0625 16:26:49.430931 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-net-dir\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431161 kubelet[2831]: I0625 16:26:49.430972 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-xtables-lock\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431161 kubelet[2831]: I0625 16:26:49.430999 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-node-certs\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431359 kubelet[2831]: I0625 16:26:49.431024 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-bin-dir\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431359 kubelet[2831]: I0625 16:26:49.431054 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-flexvol-driver-host\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.431359 kubelet[2831]: I0625 16:26:49.431103 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnljn\" (UniqueName: \"kubernetes.io/projected/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-kube-api-access-cnljn\") pod \"calico-node-6kgdg\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " pod="calico-system/calico-node-6kgdg" Jun 25 16:26:49.501155 kubelet[2831]: I0625 16:26:49.501114 2831 topology_manager.go:215] "Topology Admit Handler" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" podNamespace="calico-system" podName="csi-node-driver-prpxb" Jun 25 16:26:49.501772 kubelet[2831]: E0625 16:26:49.501742 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:49.532443 kubelet[2831]: I0625 16:26:49.532398 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7nq\" (UniqueName: 
\"kubernetes.io/projected/8474323b-f265-4427-9f9e-fd6fa285383b-kube-api-access-dp7nq\") pod \"csi-node-driver-prpxb\" (UID: \"8474323b-f265-4427-9f9e-fd6fa285383b\") " pod="calico-system/csi-node-driver-prpxb" Jun 25 16:26:49.535092 kubelet[2831]: I0625 16:26:49.535039 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8474323b-f265-4427-9f9e-fd6fa285383b-socket-dir\") pod \"csi-node-driver-prpxb\" (UID: \"8474323b-f265-4427-9f9e-fd6fa285383b\") " pod="calico-system/csi-node-driver-prpxb" Jun 25 16:26:49.535777 kubelet[2831]: I0625 16:26:49.535757 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8474323b-f265-4427-9f9e-fd6fa285383b-kubelet-dir\") pod \"csi-node-driver-prpxb\" (UID: \"8474323b-f265-4427-9f9e-fd6fa285383b\") " pod="calico-system/csi-node-driver-prpxb" Jun 25 16:26:49.536017 kubelet[2831]: I0625 16:26:49.536005 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8474323b-f265-4427-9f9e-fd6fa285383b-varrun\") pod \"csi-node-driver-prpxb\" (UID: \"8474323b-f265-4427-9f9e-fd6fa285383b\") " pod="calico-system/csi-node-driver-prpxb" Jun 25 16:26:49.536121 kubelet[2831]: I0625 16:26:49.536112 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8474323b-f265-4427-9f9e-fd6fa285383b-registration-dir\") pod \"csi-node-driver-prpxb\" (UID: \"8474323b-f265-4427-9f9e-fd6fa285383b\") " pod="calico-system/csi-node-driver-prpxb" Jun 25 16:26:49.548492 kubelet[2831]: E0625 16:26:49.548471 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.548675 kubelet[2831]: W0625 16:26:49.548655 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.548799 kubelet[2831]: E0625 16:26:49.548784 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.561438 kubelet[2831]: E0625 16:26:49.561415 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.561644 kubelet[2831]: W0625 16:26:49.561625 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.561768 kubelet[2831]: E0625 16:26:49.561753 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.623341 containerd[1477]: time="2024-06-25T16:26:49.622835033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ffb96dddb-sm2l2,Uid:b019f5ab-fa3b-404a-a541-b066fa123b8e,Namespace:calico-system,Attempt:0,}" Jun 25 16:26:49.637343 kubelet[2831]: E0625 16:26:49.637314 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.637544 kubelet[2831]: W0625 16:26:49.637526 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.637666 kubelet[2831]: E0625 16:26:49.637648 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.637991 kubelet[2831]: E0625 16:26:49.637966 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.638095 kubelet[2831]: W0625 16:26:49.638063 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.638095 kubelet[2831]: E0625 16:26:49.638095 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.638391 kubelet[2831]: E0625 16:26:49.638372 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.638391 kubelet[2831]: W0625 16:26:49.638389 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.638540 kubelet[2831]: E0625 16:26:49.638410 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.638659 kubelet[2831]: E0625 16:26:49.638643 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.638659 kubelet[2831]: W0625 16:26:49.638659 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.638779 kubelet[2831]: E0625 16:26:49.638687 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.638992 kubelet[2831]: E0625 16:26:49.638970 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.639070 kubelet[2831]: W0625 16:26:49.638995 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.639124 kubelet[2831]: E0625 16:26:49.639098 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.639282 kubelet[2831]: E0625 16:26:49.639265 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.639282 kubelet[2831]: W0625 16:26:49.639281 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.639403 kubelet[2831]: E0625 16:26:49.639298 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.639584 kubelet[2831]: E0625 16:26:49.639560 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.639584 kubelet[2831]: W0625 16:26:49.639583 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.639723 kubelet[2831]: E0625 16:26:49.639612 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.639884 kubelet[2831]: E0625 16:26:49.639865 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.639956 kubelet[2831]: W0625 16:26:49.639887 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.639956 kubelet[2831]: E0625 16:26:49.639908 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.640174 kubelet[2831]: E0625 16:26:49.640156 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.640248 kubelet[2831]: W0625 16:26:49.640178 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.640248 kubelet[2831]: E0625 16:26:49.640200 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.640612 kubelet[2831]: E0625 16:26:49.640443 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.640612 kubelet[2831]: W0625 16:26:49.640460 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.640612 kubelet[2831]: E0625 16:26:49.640486 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.640783 kubelet[2831]: E0625 16:26:49.640713 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.640783 kubelet[2831]: W0625 16:26:49.640730 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.640882 kubelet[2831]: E0625 16:26:49.640823 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.641011 kubelet[2831]: E0625 16:26:49.640967 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.641011 kubelet[2831]: W0625 16:26:49.640980 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.641128 kubelet[2831]: E0625 16:26:49.641071 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.641251 kubelet[2831]: E0625 16:26:49.641219 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.641251 kubelet[2831]: W0625 16:26:49.641229 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.641360 kubelet[2831]: E0625 16:26:49.641320 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.641475 kubelet[2831]: E0625 16:26:49.641454 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.641475 kubelet[2831]: W0625 16:26:49.641473 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.641577 kubelet[2831]: E0625 16:26:49.641565 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.641749 kubelet[2831]: E0625 16:26:49.641732 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.641749 kubelet[2831]: W0625 16:26:49.641750 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.641865 kubelet[2831]: E0625 16:26:49.641770 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.642023 kubelet[2831]: E0625 16:26:49.642006 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.642023 kubelet[2831]: W0625 16:26:49.642022 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.642131 kubelet[2831]: E0625 16:26:49.642044 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.642484 kubelet[2831]: E0625 16:26:49.642464 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.642484 kubelet[2831]: W0625 16:26:49.642482 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.642627 kubelet[2831]: E0625 16:26:49.642503 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.644228 kubelet[2831]: E0625 16:26:49.643986 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.644228 kubelet[2831]: W0625 16:26:49.644004 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.644228 kubelet[2831]: E0625 16:26:49.644101 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.644228 kubelet[2831]: E0625 16:26:49.644239 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.644228 kubelet[2831]: W0625 16:26:49.644248 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.644546 kubelet[2831]: E0625 16:26:49.644344 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.644546 kubelet[2831]: E0625 16:26:49.644462 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.644546 kubelet[2831]: W0625 16:26:49.644471 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.644706 kubelet[2831]: E0625 16:26:49.644595 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.644761 kubelet[2831]: E0625 16:26:49.644719 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.644761 kubelet[2831]: W0625 16:26:49.644728 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.644761 kubelet[2831]: E0625 16:26:49.644747 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.645309 kubelet[2831]: E0625 16:26:49.644934 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.645309 kubelet[2831]: W0625 16:26:49.644946 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.645309 kubelet[2831]: E0625 16:26:49.644964 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.645309 kubelet[2831]: E0625 16:26:49.645210 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.645309 kubelet[2831]: W0625 16:26:49.645221 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.645309 kubelet[2831]: E0625 16:26:49.645239 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.645777 kubelet[2831]: E0625 16:26:49.645435 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.645777 kubelet[2831]: W0625 16:26:49.645448 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.645777 kubelet[2831]: E0625 16:26:49.645464 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:49.645907 kubelet[2831]: E0625 16:26:49.645818 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.645907 kubelet[2831]: W0625 16:26:49.645828 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.645907 kubelet[2831]: E0625 16:26:49.645844 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.660322 kubelet[2831]: E0625 16:26:49.660292 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:49.660322 kubelet[2831]: W0625 16:26:49.660321 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:49.660488 kubelet[2831]: E0625 16:26:49.660343 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:49.690027 containerd[1477]: time="2024-06-25T16:26:49.689680731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:49.690027 containerd[1477]: time="2024-06-25T16:26:49.689748528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:49.690027 containerd[1477]: time="2024-06-25T16:26:49.689771627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:49.690027 containerd[1477]: time="2024-06-25T16:26:49.689791027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:49.697480 containerd[1477]: time="2024-06-25T16:26:49.697433252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kgdg,Uid:b4146ffb-43cc-4c81-84c8-6e23adccb5cb,Namespace:calico-system,Attempt:0,}" Jun 25 16:26:49.725804 systemd[1]: Started cri-containerd-79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1.scope - libcontainer container 79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1. Jun 25 16:26:49.750653 containerd[1477]: time="2024-06-25T16:26:49.750551143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:49.750653 containerd[1477]: time="2024-06-25T16:26:49.750618541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:49.750935 containerd[1477]: time="2024-06-25T16:26:49.750895531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:49.751035 containerd[1477]: time="2024-06-25T16:26:49.750917030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:49.766000 audit: BPF prog-id=145 op=LOAD Jun 25 16:26:49.767000 audit: BPF prog-id=146 op=LOAD Jun 25 16:26:49.767000 audit[3248]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3238 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666638653332643265356163346432393639373831383561636630 Jun 25 16:26:49.767000 audit: BPF prog-id=147 op=LOAD Jun 25 16:26:49.767000 audit[3248]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3238 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666638653332643265356163346432393639373831383561636630 Jun 25 16:26:49.767000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:26:49.767000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:26:49.767000 audit: BPF prog-id=148 op=LOAD Jun 25 16:26:49.767000 audit[3248]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3238 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666638653332643265356163346432393639373831383561636630 Jun 25 16:26:49.787792 systemd[1]: Started cri-containerd-317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049.scope - libcontainer container 317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049. 
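The audit PROCTITLE records interleaved above store each process's command line as hex-encoded bytes with NUL separators between arguments. Decoding is mechanical; as a quick sketch (plain Python for illustration, not part of the log's own tooling), the runc proctitle values attached to the BPF prog-id loads above decode to `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/79ff8e32...` (the kernel truncates the captured argv, so the container ID is cut short):

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Audit PROCTITLE fields are hex-encoded argv with NUL separators; split and decode them."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# The iptables-restore records further down carry
#   proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
# which decodes to ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters'].
```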
Jun 25 16:26:49.835756 containerd[1477]: time="2024-06-25T16:26:49.835710383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ffb96dddb-sm2l2,Uid:b019f5ab-fa3b-404a-a541-b066fa123b8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\"" Jun 25 16:26:49.839871 containerd[1477]: time="2024-06-25T16:26:49.839830035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:26:49.842000 audit: BPF prog-id=149 op=LOAD Jun 25 16:26:49.843000 audit: BPF prog-id=150 op=LOAD Jun 25 16:26:49.843000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3274 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376237343333333862643661666238633963353933623632366238 Jun 25 16:26:49.843000 audit: BPF prog-id=151 op=LOAD Jun 25 16:26:49.843000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3274 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376237343333333862643661666238633963353933623632366238 Jun 25 16:26:49.843000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:26:49.843000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:26:49.843000 audit: BPF prog-id=152 op=LOAD Jun 25 16:26:49.843000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3274 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:49.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376237343333333862643661666238633963353933623632366238 Jun 25 16:26:49.859370 containerd[1477]: time="2024-06-25T16:26:49.859285036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kgdg,Uid:b4146ffb-43cc-4c81-84c8-6e23adccb5cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\"" Jun 25 16:26:50.284000 audit[3314]: NETFILTER_CFG table=filter:97 family=2 entries=16 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:50.284000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe04fce8a0 a2=0 a3=7ffe04fce88c items=0 ppid=2968 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:26:50.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:50.285000 audit[3314]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:50.285000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe04fce8a0 a2=0 a3=0 items=0 ppid=2968 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:50.597245 kubelet[2831]: E0625 16:26:50.596271 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:52.597025 kubelet[2831]: E0625 16:26:52.596980 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:53.618681 containerd[1477]: time="2024-06-25T16:26:53.618635647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.637763 containerd[1477]: time="2024-06-25T16:26:53.637693312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:26:53.641016 containerd[1477]: time="2024-06-25T16:26:53.640967503Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.645305 containerd[1477]: time="2024-06-25T16:26:53.645266659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.650155 containerd[1477]: time="2024-06-25T16:26:53.650122098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:53.651102 containerd[1477]: time="2024-06-25T16:26:53.650803875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.810708449s" Jun 25 16:26:53.651218 containerd[1477]: time="2024-06-25T16:26:53.651106365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:26:53.652461 containerd[1477]: time="2024-06-25T16:26:53.652413421Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:26:53.669400 containerd[1477]: time="2024-06-25T16:26:53.669361556Z" level=info msg="CreateContainer within sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:26:53.701278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697453153.mount: Deactivated successfully. Jun 25 16:26:53.715733 containerd[1477]: time="2024-06-25T16:26:53.715689312Z" level=info msg="CreateContainer within sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\"" Jun 25 16:26:53.717974 containerd[1477]: time="2024-06-25T16:26:53.716389788Z" level=info msg="StartContainer for \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\"" Jun 25 16:26:53.745755 systemd[1]: Started cri-containerd-b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f.scope - libcontainer container b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f. Jun 25 16:26:53.759000 audit: BPF prog-id=153 op=LOAD Jun 25 16:26:53.760000 audit: BPF prog-id=154 op=LOAD Jun 25 16:26:53.760000 audit[3330]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3238 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:53.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239366435343637636431326532316463613837306164343565316235 Jun 25 16:26:53.760000 audit: BPF prog-id=155 op=LOAD Jun 25 16:26:53.760000 audit[3330]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3238 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:53.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239366435343637636431326532316463613837306164343565316235 Jun 25 16:26:53.760000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:26:53.760000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:26:53.760000 audit: BPF prog-id=156 op=LOAD Jun 25 16:26:53.760000 audit[3330]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3238 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:53.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239366435343637636431326532316463613837306164343565316235 Jun 25 16:26:53.794156 containerd[1477]: time="2024-06-25T16:26:53.794113897Z" level=info msg="StartContainer for 
\"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" returns successfully" Jun 25 16:26:54.597247 kubelet[2831]: E0625 16:26:54.596739 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:54.678377 containerd[1477]: time="2024-06-25T16:26:54.678322424Z" level=info msg="StopContainer for \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" with timeout 300 (s)" Jun 25 16:26:54.679063 containerd[1477]: time="2024-06-25T16:26:54.679027501Z" level=info msg="Stop container \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" with signal terminated" Jun 25 16:26:54.727783 systemd[1]: cri-containerd-b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f.scope: Deactivated successfully. Jun 25 16:26:54.735274 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:26:54.735391 kernel: audit: type=1334 audit(1719332814.726:499): prog-id=153 op=UNLOAD Jun 25 16:26:54.726000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:26:54.735000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:26:54.740650 kernel: audit: type=1334 audit(1719332814.735:500): prog-id=156 op=UNLOAD Jun 25 16:26:54.774493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f-rootfs.mount: Deactivated successfully. Jun 25 16:26:56.596892 kubelet[2831]: E0625 16:26:56.596841 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:57.602688 containerd[1477]: time="2024-06-25T16:26:57.602622819Z" level=info msg="shim disconnected" id=b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f namespace=k8s.io Jun 25 16:26:57.602688 containerd[1477]: time="2024-06-25T16:26:57.602685717Z" level=warning msg="cleaning up after shim disconnected" id=b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f namespace=k8s.io Jun 25 16:26:57.603183 containerd[1477]: time="2024-06-25T16:26:57.602696817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:57.620802 containerd[1477]: time="2024-06-25T16:26:57.620757056Z" level=info msg="StopContainer for \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" returns successfully" Jun 25 16:26:57.621389 containerd[1477]: time="2024-06-25T16:26:57.621348737Z" level=info msg="StopPodSandbox for \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\"" Jun 25 16:26:57.621521 containerd[1477]: time="2024-06-25T16:26:57.621436435Z" level=info msg="Container to stop \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:26:57.624934 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1-shm.mount: Deactivated successfully. Jun 25 16:26:57.635210 systemd[1]: cri-containerd-79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1.scope: Deactivated successfully. 
Jun 25 16:26:57.633000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:26:57.647611 kernel: audit: type=1334 audit(1719332817.633:501): prog-id=145 op=UNLOAD Jun 25 16:26:57.648000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:26:57.654617 kernel: audit: type=1334 audit(1719332817.648:502): prog-id=148 op=UNLOAD Jun 25 16:26:57.677333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1-rootfs.mount: Deactivated successfully. Jun 25 16:26:57.692424 containerd[1477]: time="2024-06-25T16:26:57.692360232Z" level=info msg="shim disconnected" id=79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1 namespace=k8s.io Jun 25 16:26:57.692676 containerd[1477]: time="2024-06-25T16:26:57.692449529Z" level=warning msg="cleaning up after shim disconnected" id=79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1 namespace=k8s.io Jun 25 16:26:57.692676 containerd[1477]: time="2024-06-25T16:26:57.692461629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:57.712813 containerd[1477]: time="2024-06-25T16:26:57.712774098Z" level=info msg="TearDown network for sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" successfully" Jun 25 16:26:57.712813 containerd[1477]: time="2024-06-25T16:26:57.712806897Z" level=info msg="StopPodSandbox for \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" returns successfully" Jun 25 16:26:57.731193 kubelet[2831]: I0625 16:26:57.731164 2831 topology_manager.go:215] "Topology Admit Handler" podUID="62281b2f-9443-4b56-bf3c-299009f51267" podNamespace="calico-system" podName="calico-typha-7d7db5d7b8-f9rtf" Jun 25 16:26:57.732255 kubelet[2831]: E0625 16:26:57.732232 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b019f5ab-fa3b-404a-a541-b066fa123b8e" containerName="calico-typha" Jun 25 16:26:57.732426 kubelet[2831]: I0625 16:26:57.732412 2831 memory_manager.go:346] "RemoveStaleState removing state" podUID="b019f5ab-fa3b-404a-a541-b066fa123b8e" containerName="calico-typha" Jun 25 16:26:57.740438 systemd[1]: Created slice kubepods-besteffort-pod62281b2f_9443_4b56_bf3c_299009f51267.slice - libcontainer container kubepods-besteffort-pod62281b2f_9443_4b56_bf3c_299009f51267.slice. 
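The kubepods-besteffort-pod62281b2f_9443_4b56_bf3c_299009f51267.slice unit created above follows directly from the new pod's metadata: the QoS class (BestEffort) plus the pod UID with dashes replaced by underscores. A small sketch of that naming rule as it appears in this log (an illustration of the systemd cgroup-driver convention, not kubelet code):

```python
def besteffort_pod_slice(pod_uid: str) -> str:
    """BestEffort pod slice name as seen in the log: 'pod' + UID with '-' mapped to '_'."""
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("62281b2f-9443-4b56-bf3c-299009f51267"))
# -> kubepods-besteffort-pod62281b2f_9443_4b56_bf3c_299009f51267.slice
```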
Jun 25 16:26:57.744000 audit[3428]: NETFILTER_CFG table=filter:99 family=2 entries=16 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.744000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdc39755b0 a2=0 a3=7ffdc397559c items=0 ppid=2968 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.764191 kernel: audit: type=1325 audit(1719332817.744:503): table=filter:99 family=2 entries=16 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.764317 kernel: audit: type=1300 audit(1719332817.744:503): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdc39755b0 a2=0 a3=7ffdc397559c items=0 ppid=2968 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.765618 kernel: audit: type=1327 audit(1719332817.744:503): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.744000 audit[3428]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.776577 kernel: audit: type=1325 audit(1719332817.744:504): table=nat:100 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.744000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc39755b0 a2=0 a3=0 items=0 ppid=2968 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.781127 kubelet[2831]: E0625 16:26:57.781107 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.781263 kubelet[2831]: W0625 16:26:57.781246 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.781339 kubelet[2831]: E0625 16:26:57.781330 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.781619 kubelet[2831]: E0625 16:26:57.781607 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.781708 kubelet[2831]: W0625 16:26:57.781698 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.781775 kubelet[2831]: E0625 16:26:57.781767 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.782027 kubelet[2831]: E0625 16:26:57.782017 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.782111 kubelet[2831]: W0625 16:26:57.782103 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.782173 kubelet[2831]: E0625 16:26:57.782160 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.782418 kubelet[2831]: E0625 16:26:57.782409 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.782496 kubelet[2831]: W0625 16:26:57.782487 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.782558 kubelet[2831]: E0625 16:26:57.782545 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.782816 kubelet[2831]: E0625 16:26:57.782808 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.782902 kubelet[2831]: W0625 16:26:57.782893 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.782966 kubelet[2831]: E0625 16:26:57.782952 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.783209 kubelet[2831]: E0625 16:26:57.783200 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.783286 kubelet[2831]: W0625 16:26:57.783278 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.783348 kubelet[2831]: E0625 16:26:57.783336 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.783935 kubelet[2831]: E0625 16:26:57.783918 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.784071 kubelet[2831]: W0625 16:26:57.784061 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.784142 kubelet[2831]: E0625 16:26:57.784134 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.784444 kubelet[2831]: E0625 16:26:57.784433 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.784524 kubelet[2831]: W0625 16:26:57.784514 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.784603 kubelet[2831]: E0625 16:26:57.784582 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.784858 kubelet[2831]: E0625 16:26:57.784849 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.784942 kubelet[2831]: W0625 16:26:57.784933 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.785009 kubelet[2831]: E0625 16:26:57.785002 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.785236 kubelet[2831]: E0625 16:26:57.785227 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.785310 kubelet[2831]: W0625 16:26:57.785302 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.785372 kubelet[2831]: E0625 16:26:57.785359 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.785606 kubelet[2831]: E0625 16:26:57.785579 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.785688 kubelet[2831]: W0625 16:26:57.785679 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.785746 kubelet[2831]: E0625 16:26:57.785740 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.785972 kubelet[2831]: E0625 16:26:57.785964 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.786045 kubelet[2831]: W0625 16:26:57.786036 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.786102 kubelet[2831]: E0625 16:26:57.786095 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.787717 kernel: audit: type=1300 audit(1719332817.744:504): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc39755b0 a2=0 a3=0 items=0 ppid=2968 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.794360 kernel: audit: type=1327 audit(1719332817.744:504): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.762000 audit[3430]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=3430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.762000 audit[3430]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeb44f6260 a2=0 a3=7ffeb44f624c items=0 ppid=2968 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.795778 kubelet[2831]: E0625 16:26:57.795759 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.795778 kubelet[2831]: W0625 16:26:57.795774 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.795921 kubelet[2831]: E0625 16:26:57.795797 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.795921 kubelet[2831]: I0625 16:26:57.795840 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b019f5ab-fa3b-404a-a541-b066fa123b8e-tigera-ca-bundle\") pod \"b019f5ab-fa3b-404a-a541-b066fa123b8e\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " Jun 25 16:26:57.796061 kubelet[2831]: E0625 16:26:57.796043 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.796135 kubelet[2831]: W0625 16:26:57.796060 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.796135 kubelet[2831]: E0625 16:26:57.796088 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.796135 kubelet[2831]: I0625 16:26:57.796122 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b019f5ab-fa3b-404a-a541-b066fa123b8e-typha-certs\") pod \"b019f5ab-fa3b-404a-a541-b066fa123b8e\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " Jun 25 16:26:57.796439 kubelet[2831]: E0625 16:26:57.796417 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.796546 kubelet[2831]: W0625 16:26:57.796532 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.796651 kubelet[2831]: E0625 16:26:57.796639 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.796775 kubelet[2831]: I0625 16:26:57.796764 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdpnc\" (UniqueName: \"kubernetes.io/projected/b019f5ab-fa3b-404a-a541-b066fa123b8e-kube-api-access-qdpnc\") pod \"b019f5ab-fa3b-404a-a541-b066fa123b8e\" (UID: \"b019f5ab-fa3b-404a-a541-b066fa123b8e\") " Jun 25 16:26:57.797115 kubelet[2831]: E0625 16:26:57.797100 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.797222 kubelet[2831]: W0625 16:26:57.797209 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.797320 kubelet[2831]: E0625 16:26:57.797309 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.797434 kubelet[2831]: I0625 16:26:57.797423 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62281b2f-9443-4b56-bf3c-299009f51267-typha-certs\") pod \"calico-typha-7d7db5d7b8-f9rtf\" (UID: \"62281b2f-9443-4b56-bf3c-299009f51267\") " pod="calico-system/calico-typha-7d7db5d7b8-f9rtf" Jun 25 16:26:57.797778 kubelet[2831]: E0625 16:26:57.797755 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.797889 kubelet[2831]: W0625 16:26:57.797874 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.797999 kubelet[2831]: E0625 16:26:57.797986 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.798539 kubelet[2831]: E0625 16:26:57.798524 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.798665 kubelet[2831]: W0625 16:26:57.798651 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.798773 kubelet[2831]: E0625 16:26:57.798761 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.798884 kubelet[2831]: I0625 16:26:57.798872 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62281b2f-9443-4b56-bf3c-299009f51267-tigera-ca-bundle\") pod \"calico-typha-7d7db5d7b8-f9rtf\" (UID: \"62281b2f-9443-4b56-bf3c-299009f51267\") " pod="calico-system/calico-typha-7d7db5d7b8-f9rtf" Jun 25 16:26:57.799231 kubelet[2831]: E0625 16:26:57.799206 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.799348 kubelet[2831]: W0625 16:26:57.799333 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.799455 kubelet[2831]: E0625 16:26:57.799444 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.799571 kubelet[2831]: I0625 16:26:57.799559 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdfm\" (UniqueName: \"kubernetes.io/projected/62281b2f-9443-4b56-bf3c-299009f51267-kube-api-access-hbdfm\") pod \"calico-typha-7d7db5d7b8-f9rtf\" (UID: \"62281b2f-9443-4b56-bf3c-299009f51267\") " pod="calico-system/calico-typha-7d7db5d7b8-f9rtf" Jun 25 16:26:57.788000 audit[3430]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=3430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:57.788000 audit[3430]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb44f6260 a2=0 a3=0 items=0 ppid=2968 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:57.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:57.800156 kubelet[2831]: E0625 16:26:57.800141 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.800258 kubelet[2831]: W0625 16:26:57.800246 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.800353 kubelet[2831]: E0625 16:26:57.800343 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.800690 kubelet[2831]: E0625 16:26:57.800676 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.800822 kubelet[2831]: W0625 16:26:57.800809 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.800931 kubelet[2831]: E0625 16:26:57.800920 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.801314 kubelet[2831]: E0625 16:26:57.801299 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.801426 kubelet[2831]: W0625 16:26:57.801409 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.801528 kubelet[2831]: E0625 16:26:57.801517 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.804537 systemd[1]: var-lib-kubelet-pods-b019f5ab\x2dfa3b\x2d404a\x2da541\x2db066fa123b8e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 16:26:57.808964 systemd[1]: var-lib-kubelet-pods-b019f5ab\x2dfa3b\x2d404a\x2da541\x2db066fa123b8e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 16:26:57.809918 kubelet[2831]: E0625 16:26:57.809684 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.809918 kubelet[2831]: W0625 16:26:57.809697 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.809918 kubelet[2831]: E0625 16:26:57.809733 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.810066 kubelet[2831]: E0625 16:26:57.809985 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.810066 kubelet[2831]: W0625 16:26:57.809996 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.810066 kubelet[2831]: E0625 16:26:57.810044 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.811216 kubelet[2831]: I0625 16:26:57.811192 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b019f5ab-fa3b-404a-a541-b066fa123b8e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "b019f5ab-fa3b-404a-a541-b066fa123b8e" (UID: "b019f5ab-fa3b-404a-a541-b066fa123b8e"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:26:57.815176 systemd[1]: var-lib-kubelet-pods-b019f5ab\x2dfa3b\x2d404a\x2da541\x2db066fa123b8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdpnc.mount: Deactivated successfully. Jun 25 16:26:57.816679 kubelet[2831]: I0625 16:26:57.816206 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b019f5ab-fa3b-404a-a541-b066fa123b8e-kube-api-access-qdpnc" (OuterVolumeSpecName: "kube-api-access-qdpnc") pod "b019f5ab-fa3b-404a-a541-b066fa123b8e" (UID: "b019f5ab-fa3b-404a-a541-b066fa123b8e"). InnerVolumeSpecName "kube-api-access-qdpnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:26:57.816822 kubelet[2831]: E0625 16:26:57.816810 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.816922 kubelet[2831]: W0625 16:26:57.816894 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.816989 kubelet[2831]: E0625 16:26:57.816937 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.817186 kubelet[2831]: E0625 16:26:57.817169 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.817186 kubelet[2831]: W0625 16:26:57.817184 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.817316 kubelet[2831]: E0625 16:26:57.817277 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.817471 kubelet[2831]: E0625 16:26:57.817456 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.817471 kubelet[2831]: W0625 16:26:57.817468 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.817627 kubelet[2831]: E0625 16:26:57.817493 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.817881 kubelet[2831]: I0625 16:26:57.817854 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b019f5ab-fa3b-404a-a541-b066fa123b8e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b019f5ab-fa3b-404a-a541-b066fa123b8e" (UID: "b019f5ab-fa3b-404a-a541-b066fa123b8e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:26:57.901040 kubelet[2831]: E0625 16:26:57.901011 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.901040 kubelet[2831]: W0625 16:26:57.901032 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.901419 kubelet[2831]: E0625 16:26:57.901059 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.901419 kubelet[2831]: E0625 16:26:57.901331 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.901419 kubelet[2831]: W0625 16:26:57.901344 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.901419 kubelet[2831]: E0625 16:26:57.901363 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.901641 kubelet[2831]: E0625 16:26:57.901560 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.901641 kubelet[2831]: W0625 16:26:57.901569 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.901641 kubelet[2831]: E0625 16:26:57.901585 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.901787 kubelet[2831]: I0625 16:26:57.901660 2831 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b019f5ab-fa3b-404a-a541-b066fa123b8e-tigera-ca-bundle\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:26:57.901787 kubelet[2831]: I0625 16:26:57.901677 2831 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qdpnc\" (UniqueName: \"kubernetes.io/projected/b019f5ab-fa3b-404a-a541-b066fa123b8e-kube-api-access-qdpnc\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:26:57.901787 kubelet[2831]: I0625 16:26:57.901692 2831 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b019f5ab-fa3b-404a-a541-b066fa123b8e-typha-certs\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:26:57.902042 kubelet[2831]: E0625 16:26:57.901895 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.902042 kubelet[2831]: W0625 16:26:57.901905 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.902042 kubelet[2831]: E0625 16:26:57.901924 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.902197 kubelet[2831]: E0625 16:26:57.902100 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.902197 kubelet[2831]: W0625 16:26:57.902111 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.902197 kubelet[2831]: E0625 16:26:57.902126 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.902368 kubelet[2831]: E0625 16:26:57.902281 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.902368 kubelet[2831]: W0625 16:26:57.902289 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.902368 kubelet[2831]: E0625 16:26:57.902302 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.902520 kubelet[2831]: E0625 16:26:57.902497 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.902520 kubelet[2831]: W0625 16:26:57.902506 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.902615 kubelet[2831]: E0625 16:26:57.902523 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.902956 kubelet[2831]: E0625 16:26:57.902941 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.903079 kubelet[2831]: W0625 16:26:57.903065 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.903169 kubelet[2831]: E0625 16:26:57.903160 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.903442 kubelet[2831]: E0625 16:26:57.903431 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.903537 kubelet[2831]: W0625 16:26:57.903526 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.903634 kubelet[2831]: E0625 16:26:57.903625 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.903926 kubelet[2831]: E0625 16:26:57.903912 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.904048 kubelet[2831]: W0625 16:26:57.904036 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.904144 kubelet[2831]: E0625 16:26:57.904134 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.904463 kubelet[2831]: E0625 16:26:57.904449 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.904617 kubelet[2831]: W0625 16:26:57.904576 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.904697 kubelet[2831]: E0625 16:26:57.904683 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.904894 kubelet[2831]: E0625 16:26:57.904882 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.905008 kubelet[2831]: W0625 16:26:57.904988 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.905083 kubelet[2831]: E0625 16:26:57.905016 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.905354 kubelet[2831]: E0625 16:26:57.905340 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.905460 kubelet[2831]: W0625 16:26:57.905446 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.905549 kubelet[2831]: E0625 16:26:57.905533 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.905871 kubelet[2831]: E0625 16:26:57.905857 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.905986 kubelet[2831]: W0625 16:26:57.905972 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.906288 kubelet[2831]: E0625 16:26:57.906276 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.906389 kubelet[2831]: W0625 16:26:57.906379 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.906468 kubelet[2831]: E0625 16:26:57.906460 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.907170 kubelet[2831]: E0625 16:26:57.907157 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.909426 kubelet[2831]: E0625 16:26:57.909402 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.909547 kubelet[2831]: W0625 16:26:57.909533 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.909692 kubelet[2831]: E0625 16:26:57.909680 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:57.911946 kubelet[2831]: E0625 16:26:57.911931 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.912070 kubelet[2831]: W0625 16:26:57.912056 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.912166 kubelet[2831]: E0625 16:26:57.912156 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:57.912710 kubelet[2831]: E0625 16:26:57.912691 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:57.912710 kubelet[2831]: W0625 16:26:57.912710 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:57.912822 kubelet[2831]: E0625 16:26:57.912728 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.043585 containerd[1477]: time="2024-06-25T16:26:58.043531149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7db5d7b8-f9rtf,Uid:62281b2f-9443-4b56-bf3c-299009f51267,Namespace:calico-system,Attempt:0,}" Jun 25 16:26:58.084187 containerd[1477]: time="2024-06-25T16:26:58.084094111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:58.084959 containerd[1477]: time="2024-06-25T16:26:58.084901286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:58.085088 containerd[1477]: time="2024-06-25T16:26:58.084984784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:58.085088 containerd[1477]: time="2024-06-25T16:26:58.085019983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:58.101769 systemd[1]: Started cri-containerd-f339da3374e87f6c3cba59cc8c9e46102aa0366c866e7bd10c26c3827048d6ca.scope - libcontainer container f339da3374e87f6c3cba59cc8c9e46102aa0366c866e7bd10c26c3827048d6ca. 
Jun 25 16:26:58.110000 audit: BPF prog-id=157 op=LOAD Jun 25 16:26:58.111000 audit: BPF prog-id=158 op=LOAD Jun 25 16:26:58.111000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3489 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633333964613333373465383766366333636261353963633863396534 Jun 25 16:26:58.111000 audit: BPF prog-id=159 op=LOAD Jun 25 16:26:58.111000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3489 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633333964613333373465383766366333636261353963633863396534 Jun 25 16:26:58.111000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:26:58.111000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:26:58.111000 audit: BPF prog-id=160 op=LOAD Jun 25 16:26:58.111000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3489 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633333964613333373465383766366333636261353963633863396534 Jun 25 16:26:58.150666 containerd[1477]: time="2024-06-25T16:26:58.150629180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7db5d7b8-f9rtf,Uid:62281b2f-9443-4b56-bf3c-299009f51267,Namespace:calico-system,Attempt:0,} returns sandbox id \"f339da3374e87f6c3cba59cc8c9e46102aa0366c866e7bd10c26c3827048d6ca\"" Jun 25 16:26:58.159196 containerd[1477]: time="2024-06-25T16:26:58.159070722Z" level=info msg="CreateContainer within sandbox \"f339da3374e87f6c3cba59cc8c9e46102aa0366c866e7bd10c26c3827048d6ca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:26:58.197902 containerd[1477]: time="2024-06-25T16:26:58.197831339Z" level=info msg="CreateContainer within sandbox \"f339da3374e87f6c3cba59cc8c9e46102aa0366c866e7bd10c26c3827048d6ca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ce269a4aef7ac44956d90c3c19b8b22f8face91a96a8d217d735de20294930ee\"" Jun 25 16:26:58.200387 containerd[1477]: time="2024-06-25T16:26:58.198514418Z" level=info msg="StartContainer for \"ce269a4aef7ac44956d90c3c19b8b22f8face91a96a8d217d735de20294930ee\"" Jun 25 16:26:58.226798 systemd[1]: Started cri-containerd-ce269a4aef7ac44956d90c3c19b8b22f8face91a96a8d217d735de20294930ee.scope - libcontainer container ce269a4aef7ac44956d90c3c19b8b22f8face91a96a8d217d735de20294930ee. 
Jun 25 16:26:58.258000 audit: BPF prog-id=161 op=LOAD Jun 25 16:26:58.259000 audit: BPF prog-id=162 op=LOAD Jun 25 16:26:58.259000 audit[3531]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3489 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365323639613461656637616334343935366439306333633139623862 Jun 25 16:26:58.259000 audit: BPF prog-id=163 op=LOAD Jun 25 16:26:58.259000 audit[3531]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3489 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365323639613461656637616334343935366439306333633139623862 Jun 25 16:26:58.260000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:26:58.260000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:26:58.260000 audit: BPF prog-id=164 op=LOAD Jun 25 16:26:58.260000 audit[3531]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3489 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365323639613461656637616334343935366439306333633139623862 Jun 25 16:26:58.542334 containerd[1477]: time="2024-06-25T16:26:58.542227326Z" level=info msg="StartContainer for \"ce269a4aef7ac44956d90c3c19b8b22f8face91a96a8d217d735de20294930ee\" returns successfully" Jun 25 16:26:58.597337 kubelet[2831]: E0625 16:26:58.597299 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:26:58.680072 containerd[1477]: time="2024-06-25T16:26:58.680023520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:58.683374 containerd[1477]: time="2024-06-25T16:26:58.683317819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:26:58.688150 containerd[1477]: time="2024-06-25T16:26:58.687525291Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:58.694420 kubelet[2831]: I0625 16:26:58.692380 
2831 scope.go:117] "RemoveContainer" containerID="b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f" Jun 25 16:26:58.694420 kubelet[2831]: E0625 16:26:58.694275 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.694420 kubelet[2831]: W0625 16:26:58.694288 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.694420 kubelet[2831]: E0625 16:26:58.694313 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.694722 containerd[1477]: time="2024-06-25T16:26:58.693757801Z" level=info msg="RemoveContainer for \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\"" Jun 25 16:26:58.695107 kubelet[2831]: E0625 16:26:58.694952 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.695107 kubelet[2831]: W0625 16:26:58.694966 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.695107 kubelet[2831]: E0625 16:26:58.694986 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.695712 containerd[1477]: time="2024-06-25T16:26:58.695684842Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:58.697330 kubelet[2831]: E0625 16:26:58.697305 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.697330 kubelet[2831]: W0625 16:26:58.697323 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.697517 kubelet[2831]: E0625 16:26:58.697343 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.697701 kubelet[2831]: E0625 16:26:58.697551 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.697701 kubelet[2831]: W0625 16:26:58.697562 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.697701 kubelet[2831]: E0625 16:26:58.697578 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.697831 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.699757 kubelet[2831]: W0625 16:26:58.697842 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.697858 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.698039 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.699757 kubelet[2831]: W0625 16:26:58.698050 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.698066 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.698250 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.699757 kubelet[2831]: W0625 16:26:58.698261 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.698276 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.699757 kubelet[2831]: E0625 16:26:58.698463 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.700245 kubelet[2831]: W0625 16:26:58.698473 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.700245 kubelet[2831]: E0625 16:26:58.698490 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.704216 systemd[1]: Removed slice kubepods-besteffort-podb019f5ab_fa3b_404a_a541_b066fa123b8e.slice - libcontainer container kubepods-besteffort-podb019f5ab_fa3b_404a_a541_b066fa123b8e.slice. 
Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704505 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706026 kubelet[2831]: W0625 16:26:58.704517 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704534 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704722 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706026 kubelet[2831]: W0625 16:26:58.704732 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704746 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704894 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706026 kubelet[2831]: W0625 16:26:58.704903 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.704950 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.706026 kubelet[2831]: E0625 16:26:58.705151 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706481 kubelet[2831]: W0625 16:26:58.705161 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705193 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705454 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706481 kubelet[2831]: W0625 16:26:58.705464 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705480 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705724 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706481 kubelet[2831]: W0625 16:26:58.705735 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705751 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.706481 kubelet[2831]: E0625 16:26:58.705932 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.706481 kubelet[2831]: W0625 16:26:58.705942 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.705957 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706230 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707050 kubelet[2831]: W0625 16:26:58.706240 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706257 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706438 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707050 kubelet[2831]: W0625 16:26:58.706446 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706461 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706690 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707050 kubelet[2831]: W0625 16:26:58.706701 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707050 kubelet[2831]: E0625 16:26:58.706717 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.706918 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707970 kubelet[2831]: W0625 16:26:58.706927 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.706942 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.707106 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707970 kubelet[2831]: W0625 16:26:58.707116 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.707130 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.707291 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.707970 kubelet[2831]: W0625 16:26:58.707300 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.707314 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.707970 kubelet[2831]: E0625 16:26:58.707501 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.708557 kubelet[2831]: W0625 16:26:58.707511 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.708557 kubelet[2831]: E0625 16:26:58.707527 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.708557 kubelet[2831]: E0625 16:26:58.708404 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.708557 kubelet[2831]: W0625 16:26:58.708415 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.708557 kubelet[2831]: E0625 16:26:58.708432 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.708892 kubelet[2831]: E0625 16:26:58.708648 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.708892 kubelet[2831]: W0625 16:26:58.708658 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.708892 kubelet[2831]: E0625 16:26:58.708674 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.708892 kubelet[2831]: E0625 16:26:58.708847 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.708892 kubelet[2831]: W0625 16:26:58.708857 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.708892 kubelet[2831]: E0625 16:26:58.708872 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.709257 kubelet[2831]: E0625 16:26:58.709031 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.709257 kubelet[2831]: W0625 16:26:58.709041 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.709257 kubelet[2831]: E0625 16:26:58.709055 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.709257 kubelet[2831]: E0625 16:26:58.709238 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.709257 kubelet[2831]: W0625 16:26:58.709247 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.709257 kubelet[2831]: E0625 16:26:58.709260 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.709865 kubelet[2831]: E0625 16:26:58.709839 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.709865 kubelet[2831]: W0625 16:26:58.709857 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.709990 kubelet[2831]: E0625 16:26:58.709875 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.710144 kubelet[2831]: E0625 16:26:58.710126 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.710144 kubelet[2831]: W0625 16:26:58.710139 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.710292 kubelet[2831]: E0625 16:26:58.710155 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.710365 kubelet[2831]: E0625 16:26:58.710328 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.710365 kubelet[2831]: W0625 16:26:58.710338 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.710365 kubelet[2831]: E0625 16:26:58.710353 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.710553 kubelet[2831]: E0625 16:26:58.710538 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.710553 kubelet[2831]: W0625 16:26:58.710552 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.710830 kubelet[2831]: E0625 16:26:58.710569 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.710830 kubelet[2831]: E0625 16:26:58.710822 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.710943 kubelet[2831]: W0625 16:26:58.710832 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.710943 kubelet[2831]: E0625 16:26:58.710848 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:26:58.711208 kubelet[2831]: E0625 16:26:58.711191 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:26:58.711208 kubelet[2831]: W0625 16:26:58.711204 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:26:58.711320 kubelet[2831]: E0625 16:26:58.711220 2831 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:26:58.714298 containerd[1477]: time="2024-06-25T16:26:58.714224276Z" level=info msg="RemoveContainer for \"b96d5467cd12e21dca870ad45e1b57149006d67f4fd6a6a1ff64692ac20ef13f\" returns successfully" Jun 25 16:26:58.718609 containerd[1477]: time="2024-06-25T16:26:58.718433647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:58.723427 containerd[1477]: time="2024-06-25T16:26:58.720041898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 5.067584579s" Jun 25 16:26:58.723427 containerd[1477]: time="2024-06-25T16:26:58.720086097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:26:58.723571 kubelet[2831]: I0625 16:26:58.722782 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7d7db5d7b8-f9rtf" podStartSLOduration=9.722744116 podCreationTimestamp="2024-06-25 16:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:58.703191413 +0000 UTC m=+31.675859890" watchObservedRunningTime="2024-06-25 16:26:58.722744116 +0000 UTC m=+31.695412593" Jun 25 16:26:58.724977 containerd[1477]: time="2024-06-25T16:26:58.724946049Z" level=info msg="CreateContainer within sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:26:58.768049 containerd[1477]: time="2024-06-25T16:26:58.768002434Z" level=info msg="CreateContainer within sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\"" Jun 25 16:26:58.768559 containerd[1477]: time="2024-06-25T16:26:58.768521018Z" level=info msg="StartContainer for \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\"" Jun 25 16:26:58.802000 audit[3617]: NETFILTER_CFG table=filter:103 family=2 entries=15 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:58.802000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcc6bb91d0 a2=0 a3=7ffcc6bb91bc items=0 ppid=2968 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:58.803000 audit[3617]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:58.803000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 
a0=3 a1=7ffcc6bb91d0 a2=0 a3=7ffcc6bb91bc items=0 ppid=2968 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:58.803773 systemd[1]: Started cri-containerd-f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260.scope - libcontainer container f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260. Jun 25 16:26:58.820000 audit: BPF prog-id=165 op=LOAD Jun 25 16:26:58.820000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3274 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632396261666662666335366132663239643333626364393136306363 Jun 25 16:26:58.820000 audit: BPF prog-id=166 op=LOAD Jun 25 16:26:58.820000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3274 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632396261666662666335366132663239643333626364393136306363 Jun 25 16:26:58.820000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:26:58.820000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:26:58.820000 audit: BPF prog-id=167 op=LOAD Jun 25 16:26:58.820000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3274 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632396261666662666335366132663239643333626364393136306363 Jun 25 16:26:58.840227 containerd[1477]: time="2024-06-25T16:26:58.840180231Z" level=info msg="StartContainer for \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\" returns successfully" Jun 25 16:26:58.848898 systemd[1]: cri-containerd-f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260.scope: Deactivated successfully. 
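The container that just started and immediately deactivated its scope is the flexvol-driver container created a few entries earlier from the pod2daemon-flexvol image. In a typical Calico deployment this is an init container whose only job is to copy the uds FlexVolume binary into the host directory mounted as flexvol-driver-host, after which the kubelet's probe stops failing. The sketch below illustrates that install step under assumed source and destination paths (they are not taken from this log), not Calico's actual implementation.

    // install_flexvol_sketch.go - rough sketch of what a flexvol-driver init
    // container does: copy the driver binary onto the host plugin directory so
    // the kubelet's next FlexVolume probe finds an executable there.
    // Both paths below are illustrative assumptions.
    package main

    import (
        "io"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        src := "/usr/local/bin/flexvol" // binary shipped in the image (assumed)
        dstDir := "/host/driver"        // host path mounted into the container (assumed)
        dst := filepath.Join(dstDir, "uds")

        if err := os.MkdirAll(dstDir, 0o755); err != nil {
            log.Fatal(err)
        }
        in, err := os.Open(src)
        if err != nil {
            log.Fatal(err)
        }
        defer in.Close()

        // Write to a temporary name first, then rename, so the kubelet never
        // probes a half-written executable.
        tmp, err := os.CreateTemp(dstDir, ".uds-*")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(tmp, in); err != nil {
            log.Fatal(err)
        }
        tmp.Chmod(0o755)
        tmp.Close()
        if err := os.Rename(tmp.Name(), dst); err != nil {
            log.Fatal(err)
        }
    }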
Jun 25 16:26:58.850000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:26:59.354804 containerd[1477]: time="2024-06-25T16:26:59.354718105Z" level=info msg="shim disconnected" id=f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260 namespace=k8s.io Jun 25 16:26:59.354804 containerd[1477]: time="2024-06-25T16:26:59.354795502Z" level=warning msg="cleaning up after shim disconnected" id=f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260 namespace=k8s.io Jun 25 16:26:59.354804 containerd[1477]: time="2024-06-25T16:26:59.354809802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:59.599285 kubelet[2831]: I0625 16:26:59.599241 2831 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b019f5ab-fa3b-404a-a541-b066fa123b8e" path="/var/lib/kubelet/pods/b019f5ab-fa3b-404a-a541-b066fa123b8e/volumes" Jun 25 16:26:59.625168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260-rootfs.mount: Deactivated successfully. Jun 25 16:26:59.703799 containerd[1477]: time="2024-06-25T16:26:59.697256823Z" level=info msg="StopPodSandbox for \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\"" Jun 25 16:26:59.703799 containerd[1477]: time="2024-06-25T16:26:59.697341921Z" level=info msg="Container to stop \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:26:59.702814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049-shm.mount: Deactivated successfully. Jun 25 16:26:59.718000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:26:59.719417 systemd[1]: cri-containerd-317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049.scope: Deactivated successfully. Jun 25 16:26:59.720000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:26:59.743990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049-rootfs.mount: Deactivated successfully. 
Jun 25 16:26:59.754253 containerd[1477]: time="2024-06-25T16:26:59.754179415Z" level=info msg="shim disconnected" id=317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049 namespace=k8s.io Jun 25 16:26:59.754495 containerd[1477]: time="2024-06-25T16:26:59.754464506Z" level=warning msg="cleaning up after shim disconnected" id=317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049 namespace=k8s.io Jun 25 16:26:59.754605 containerd[1477]: time="2024-06-25T16:26:59.754494605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:26:59.768809 containerd[1477]: time="2024-06-25T16:26:59.768759077Z" level=info msg="TearDown network for sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" successfully" Jun 25 16:26:59.768809 containerd[1477]: time="2024-06-25T16:26:59.768800376Z" level=info msg="StopPodSandbox for \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" returns successfully" Jun 25 16:26:59.916585 kubelet[2831]: I0625 16:26:59.916533 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-flexvol-driver-host\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916615 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnljn\" (UniqueName: \"kubernetes.io/projected/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-kube-api-access-cnljn\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916651 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-tigera-ca-bundle\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916683 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-lib-calico\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916709 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-lib-modules\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916740 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-node-certs\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.916846 kubelet[2831]: I0625 16:26:59.916768 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-log-dir\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.916801 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-run-calico\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.916833 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-policysync\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.916863 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-net-dir\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.916892 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-xtables-lock\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.916928 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-bin-dir\") pod \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\" (UID: \"b4146ffb-43cc-4c81-84c8-6e23adccb5cb\") " Jun 25 16:26:59.917193 kubelet[2831]: I0625 16:26:59.917015 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918033 kubelet[2831]: I0625 16:26:59.917735 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918033 kubelet[2831]: I0625 16:26:59.917818 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918033 kubelet[2831]: I0625 16:26:59.917889 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918033 kubelet[2831]: I0625 16:26:59.917920 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-policysync" (OuterVolumeSpecName: "policysync") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918033 kubelet[2831]: I0625 16:26:59.917968 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918386 kubelet[2831]: I0625 16:26:59.917998 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918793 kubelet[2831]: I0625 16:26:59.918566 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.918793 kubelet[2831]: I0625 16:26:59.918649 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:26:59.919955 kubelet[2831]: I0625 16:26:59.919910 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:26:59.926254 systemd[1]: var-lib-kubelet-pods-b4146ffb\x2d43cc\x2d4c81\x2d84c8\x2d6e23adccb5cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcnljn.mount: Deactivated successfully. Jun 25 16:26:59.926372 systemd[1]: var-lib-kubelet-pods-b4146ffb\x2d43cc\x2d4c81\x2d84c8\x2d6e23adccb5cb-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 16:26:59.928185 kubelet[2831]: I0625 16:26:59.928018 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-node-certs" (OuterVolumeSpecName: "node-certs") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:26:59.928185 kubelet[2831]: I0625 16:26:59.928107 2831 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-kube-api-access-cnljn" (OuterVolumeSpecName: "kube-api-access-cnljn") pod "b4146ffb-43cc-4c81-84c8-6e23adccb5cb" (UID: "b4146ffb-43cc-4c81-84c8-6e23adccb5cb"). InnerVolumeSpecName "kube-api-access-cnljn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017402 2831 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-flexvol-driver-host\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017448 2831 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cnljn\" (UniqueName: \"kubernetes.io/projected/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-kube-api-access-cnljn\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017466 2831 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-tigera-ca-bundle\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017481 2831 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-lib-calico\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017498 2831 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-lib-modules\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017639 2831 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-node-certs\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017666 2831 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-log-dir\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.017819 kubelet[2831]: I0625 16:27:00.017683 2831 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-var-run-calico\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.018547 kubelet[2831]: I0625 16:27:00.017698 2831 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-policysync\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.018547 kubelet[2831]: I0625 16:27:00.017713 2831 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-net-dir\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.018547 kubelet[2831]: I0625 16:27:00.017727 2831 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-xtables-lock\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.018547 kubelet[2831]: I0625 16:27:00.017744 2831 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4146ffb-43cc-4c81-84c8-6e23adccb5cb-cni-bin-dir\") on node \"ci-3815.2.4-a-a46e2cd05c\" DevicePath \"\"" Jun 25 16:27:00.597385 kubelet[2831]: E0625 16:27:00.597333 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:00.706671 kubelet[2831]: I0625 16:27:00.706635 2831 scope.go:117] "RemoveContainer" containerID="f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260" Jun 25 16:27:00.712016 containerd[1477]: time="2024-06-25T16:27:00.711653328Z" level=info msg="RemoveContainer for \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\"" Jun 25 16:27:00.714147 systemd[1]: Removed slice kubepods-besteffort-podb4146ffb_43cc_4c81_84c8_6e23adccb5cb.slice - libcontainer container kubepods-besteffort-podb4146ffb_43cc_4c81_84c8_6e23adccb5cb.slice. Jun 25 16:27:00.723010 containerd[1477]: time="2024-06-25T16:27:00.722917596Z" level=info msg="RemoveContainer for \"f29baffbfc56a2f29d33bcd9160cc5f59bf3770c02b494043b5ab907616d6260\" returns successfully" Jun 25 16:27:00.761549 kubelet[2831]: I0625 16:27:00.761511 2831 topology_manager.go:215] "Topology Admit Handler" podUID="b2f0cdf0-4cfe-4637-8043-4a7dffb1f652" podNamespace="calico-system" podName="calico-node-fffvm" Jun 25 16:27:00.761741 kubelet[2831]: E0625 16:27:00.761583 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4146ffb-43cc-4c81-84c8-6e23adccb5cb" containerName="flexvol-driver" Jun 25 16:27:00.761741 kubelet[2831]: I0625 16:27:00.761628 2831 memory_manager.go:346] "RemoveStaleState removing state" podUID="b4146ffb-43cc-4c81-84c8-6e23adccb5cb" containerName="flexvol-driver" Jun 25 16:27:00.768228 systemd[1]: Created slice kubepods-besteffort-podb2f0cdf0_4cfe_4637_8043_4a7dffb1f652.slice - libcontainer container kubepods-besteffort-podb2f0cdf0_4cfe_4637_8043_4a7dffb1f652.slice. 
Jun 25 16:27:00.924041 kubelet[2831]: I0625 16:27:00.923994 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-lib-modules\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924249 kubelet[2831]: I0625 16:27:00.924050 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-cni-bin-dir\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924249 kubelet[2831]: I0625 16:27:00.924089 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-xtables-lock\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924249 kubelet[2831]: I0625 16:27:00.924122 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-cni-log-dir\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924249 kubelet[2831]: I0625 16:27:00.924155 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-flexvol-driver-host\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924249 kubelet[2831]: I0625 16:27:00.924188 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-var-run-calico\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924540 kubelet[2831]: I0625 16:27:00.924221 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-node-certs\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924540 kubelet[2831]: I0625 16:27:00.924255 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-policysync\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924540 kubelet[2831]: I0625 16:27:00.924309 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-tigera-ca-bundle\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924540 kubelet[2831]: I0625 16:27:00.924352 2831 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-var-lib-calico\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924540 kubelet[2831]: I0625 16:27:00.924390 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-cni-net-dir\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:00.924885 kubelet[2831]: I0625 16:27:00.924432 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82fk\" (UniqueName: \"kubernetes.io/projected/b2f0cdf0-4cfe-4637-8043-4a7dffb1f652-kube-api-access-t82fk\") pod \"calico-node-fffvm\" (UID: \"b2f0cdf0-4cfe-4637-8043-4a7dffb1f652\") " pod="calico-system/calico-node-fffvm" Jun 25 16:27:01.073514 containerd[1477]: time="2024-06-25T16:27:01.073455783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fffvm,Uid:b2f0cdf0-4cfe-4637-8043-4a7dffb1f652,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:01.121407 containerd[1477]: time="2024-06-25T16:27:01.121270894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:01.121407 containerd[1477]: time="2024-06-25T16:27:01.121326593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:01.122619 containerd[1477]: time="2024-06-25T16:27:01.121351892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:01.122619 containerd[1477]: time="2024-06-25T16:27:01.121735981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:01.149764 systemd[1]: Started cri-containerd-2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a.scope - libcontainer container 2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a. 
Jun 25 16:27:01.159000 audit: BPF prog-id=168 op=LOAD Jun 25 16:27:01.162436 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 16:27:01.162536 kernel: audit: type=1334 audit(1719332821.159:529): prog-id=168 op=LOAD Jun 25 16:27:01.159000 audit: BPF prog-id=169 op=LOAD Jun 25 16:27:01.167769 kernel: audit: type=1334 audit(1719332821.159:530): prog-id=169 op=LOAD Jun 25 16:27:01.159000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3713 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.177562 kernel: audit: type=1300 audit(1719332821.159:530): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3713 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336331656635313531303037653836666231316362646233343164 Jun 25 16:27:01.188726 kernel: audit: type=1327 audit(1719332821.159:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336331656635313531303037653836666231316362646233343164 Jun 25 16:27:01.159000 audit: BPF prog-id=170 op=LOAD Jun 25 16:27:01.193198 kernel: audit: type=1334 audit(1719332821.159:531): prog-id=170 op=LOAD Jun 25 16:27:01.159000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3713 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.205585 kernel: audit: type=1300 audit(1719332821.159:531): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3713 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.205703 containerd[1477]: time="2024-06-25T16:27:01.203531705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fffvm,Uid:b2f0cdf0-4cfe-4637-8043-4a7dffb1f652,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\"" Jun 25 16:27:01.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336331656635313531303037653836666231316362646233343164 Jun 25 16:27:01.217543 containerd[1477]: time="2024-06-25T16:27:01.213041329Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:27:01.217736 kernel: audit: type=1327 audit(1719332821.159:531): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336331656635313531303037653836666231316362646233343164 Jun 25 16:27:01.159000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:27:01.159000 audit: BPF prog-id=169 op=UNLOAD Jun 25 16:27:01.223113 kernel: audit: type=1334 audit(1719332821.159:532): prog-id=170 op=UNLOAD Jun 25 16:27:01.223182 kernel: audit: type=1334 audit(1719332821.159:533): prog-id=169 op=UNLOAD Jun 25 16:27:01.159000 audit: BPF prog-id=171 op=LOAD Jun 25 16:27:01.226203 kernel: audit: type=1334 audit(1719332821.159:534): prog-id=171 op=LOAD Jun 25 16:27:01.159000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3713 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336331656635313531303037653836666231316362646233343164 Jun 25 16:27:01.258925 containerd[1477]: time="2024-06-25T16:27:01.258872598Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6\"" Jun 25 16:27:01.259544 containerd[1477]: time="2024-06-25T16:27:01.259517179Z" level=info msg="StartContainer for \"4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6\"" Jun 25 16:27:01.289751 systemd[1]: Started cri-containerd-4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6.scope - libcontainer container 4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6. 
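The audit PROCTITLE records above are the audited process's argv, NUL-separated and hex-encoded; decoded, they spell out the runc invocation for the shim's container, with the trailing container id truncated exactly as in the record. A minimal decoding sketch (names illustrative):

def decode_proctitle(hex_value: str) -> list[str]:
    # An audit PROCTITLE value is argv joined by NUL bytes, printed as hex;
    # split it back into individual arguments.
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# Shortened example: the first three arguments of the proctitle records above.
print(decode_proctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
# -> ['runc', '--root', '/run/containerd/runc/k8s.io']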
Jun 25 16:27:01.300000 audit: BPF prog-id=172 op=LOAD Jun 25 16:27:01.300000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3713 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363865353033333562643332363136396332643435633763313561 Jun 25 16:27:01.300000 audit: BPF prog-id=173 op=LOAD Jun 25 16:27:01.300000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3713 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363865353033333562643332363136396332643435633763313561 Jun 25 16:27:01.300000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:27:01.300000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:27:01.300000 audit: BPF prog-id=174 op=LOAD Jun 25 16:27:01.300000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3713 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:01.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363865353033333562643332363136396332643435633763313561 Jun 25 16:27:01.322629 containerd[1477]: time="2024-06-25T16:27:01.319959024Z" level=info msg="StartContainer for \"4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6\" returns successfully" Jun 25 16:27:01.331170 systemd[1]: cri-containerd-4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6.scope: Deactivated successfully. 
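Each runc setup step above shows up in the audit stream as a BPF prog-id=… op=LOAD followed, once the step completes, by a matching op=UNLOAD. A small illustrative sketch for spotting ids that were loaded but never unloaded when scanning a journal dump; the regex and helper are assumptions for the example, not tooling referenced in the log:

import re

BPF_RE = re.compile(r"prog-id=(\d+) op=(LOAD|UNLOAD)")

def unpaired_bpf_programs(journal_text: str) -> set[str]:
    # Track audit BPF events: ids seen with op=LOAD but no later op=UNLOAD.
    loaded = set()
    for prog_id, op in BPF_RE.findall(journal_text):
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)
    return loaded

sample = ("audit: BPF prog-id=172 op=LOAD ... audit: BPF prog-id=172 op=UNLOAD "
          "... audit: BPF prog-id=174 op=LOAD")
print(unpaired_bpf_programs(sample))
# -> {'174'}  (in the full log, 174 is unloaded later, at 16:27:01.334000)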
Jun 25 16:27:01.334000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:27:01.450359 containerd[1477]: time="2024-06-25T16:27:01.448705285Z" level=info msg="shim disconnected" id=4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6 namespace=k8s.io Jun 25 16:27:01.450359 containerd[1477]: time="2024-06-25T16:27:01.448767484Z" level=warning msg="cleaning up after shim disconnected" id=4268e50335bd326169c2d45c7c15a1f28ab806700433fdaf447d36bc737ae5f6 namespace=k8s.io Jun 25 16:27:01.450359 containerd[1477]: time="2024-06-25T16:27:01.448778783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:01.600311 kubelet[2831]: I0625 16:27:01.600276 2831 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b4146ffb-43cc-4c81-84c8-6e23adccb5cb" path="/var/lib/kubelet/pods/b4146ffb-43cc-4c81-84c8-6e23adccb5cb/volumes" Jun 25 16:27:01.712529 containerd[1477]: time="2024-06-25T16:27:01.711915242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:27:02.597399 kubelet[2831]: E0625 16:27:02.597347 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:04.596422 kubelet[2831]: E0625 16:27:04.596370 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:06.597034 kubelet[2831]: E0625 16:27:06.596994 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:08.598309 kubelet[2831]: E0625 16:27:08.597063 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:10.596980 kubelet[2831]: E0625 16:27:10.596936 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:11.917960 containerd[1477]: time="2024-06-25T16:27:11.917909820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:11.920755 containerd[1477]: time="2024-06-25T16:27:11.920700250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:27:11.923348 containerd[1477]: time="2024-06-25T16:27:11.923312885Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 16:27:11.927461 containerd[1477]: time="2024-06-25T16:27:11.927428182Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:11.930572 containerd[1477]: time="2024-06-25T16:27:11.930538204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:11.931268 containerd[1477]: time="2024-06-25T16:27:11.931233787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 10.219265146s" Jun 25 16:27:11.931393 containerd[1477]: time="2024-06-25T16:27:11.931368183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:27:11.933564 containerd[1477]: time="2024-06-25T16:27:11.933486630Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:27:11.974915 containerd[1477]: time="2024-06-25T16:27:11.974866896Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672\"" Jun 25 16:27:11.975553 containerd[1477]: time="2024-06-25T16:27:11.975310585Z" level=info msg="StartContainer for \"b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672\"" Jun 25 16:27:12.009746 systemd[1]: Started cri-containerd-b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672.scope - libcontainer container b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672. 
Jun 25 16:27:12.021000 audit: BPF prog-id=175 op=LOAD Jun 25 16:27:12.025706 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:27:12.025871 kernel: audit: type=1334 audit(1719332832.021:541): prog-id=175 op=LOAD Jun 25 16:27:12.021000 audit[3833]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.052059 kernel: audit: type=1300 audit(1719332832.021:541): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.052219 kernel: audit: type=1327 audit(1719332832.021:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234376434393666666265666436336539343461313965303432333438 Jun 25 16:27:12.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234376434393666666265666436336539343461313965303432333438 Jun 25 16:27:12.021000 audit: BPF prog-id=176 op=LOAD Jun 25 16:27:12.056148 kernel: audit: type=1334 audit(1719332832.021:542): prog-id=176 op=LOAD Jun 25 16:27:12.066557 kernel: audit: type=1300 audit(1719332832.021:542): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.021000 audit[3833]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234376434393666666265666436336539343461313965303432333438 Jun 25 16:27:12.069728 containerd[1477]: time="2024-06-25T16:27:12.069693849Z" level=info msg="StartContainer for \"b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672\" returns successfully" Jun 25 16:27:12.080128 kernel: audit: type=1327 audit(1719332832.021:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234376434393666666265666436336539343461313965303432333438 Jun 25 16:27:12.021000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:27:12.086039 kernel: audit: type=1334 audit(1719332832.021:543): prog-id=176 op=UNLOAD Jun 25 16:27:12.022000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:27:12.089093 kernel: audit: type=1334 audit(1719332832.022:544): prog-id=175 op=UNLOAD Jun 25 
16:27:12.022000 audit: BPF prog-id=177 op=LOAD Jun 25 16:27:12.094396 kernel: audit: type=1334 audit(1719332832.022:545): prog-id=177 op=LOAD Jun 25 16:27:12.022000 audit[3833]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.104786 kernel: audit: type=1300 audit(1719332832.022:545): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3713 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:12.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234376434393666666265666436336539343461313965303432333438 Jun 25 16:27:12.596673 kubelet[2831]: E0625 16:27:12.596630 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:13.469842 containerd[1477]: time="2024-06-25T16:27:13.469786379Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:27:13.471862 systemd[1]: cri-containerd-b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672.scope: Deactivated successfully. Jun 25 16:27:13.474000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:27:13.493901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672-rootfs.mount: Deactivated successfully. 
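Until the CNI config becomes usable, the kubelet keeps re-logging the same "network is not ready … cni plugin not initialized" error for csi-node-driver-prpxb every few seconds. When triaging a dump like this it can help to tally those retries per pod; a minimal sketch (the regex over the quoted klog fields is an assumption, not tooling from the log):

import re
from collections import Counter

ERR_RE = re.compile(r'"Error syncing pod, skipping".*?podUID="(?P<uid>[0-9a-f-]+)"')

def count_pod_sync_errors(journal_text: str) -> Counter:
    # Tally kubelet "Error syncing pod, skipping" messages by their podUID field.
    return Counter(m.group("uid") for m in ERR_RE.finditer(journal_text))

sample = ('E0625 16:27:12.596630 2831 pod_workers.go:1300] "Error syncing pod, skipping" '
          'err="network is not ready: ..." pod="calico-system/csi-node-driver-prpxb" '
          'podUID="8474323b-f265-4427-9f9e-fd6fa285383b"')
print(count_pod_sync_errors(sample))
# -> Counter({'8474323b-f265-4427-9f9e-fd6fa285383b': 1})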
Jun 25 16:27:13.510627 kubelet[2831]: I0625 16:27:13.510395 2831 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:27:13.991394 kubelet[2831]: I0625 16:27:13.529306 2831 topology_manager.go:215] "Topology Admit Handler" podUID="75c7c79a-d44c-47ce-a93f-c54170ddf76b" podNamespace="kube-system" podName="coredns-5dd5756b68-d6rxn" Jun 25 16:27:13.991394 kubelet[2831]: W0625 16:27:13.539807 2831 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:27:13.991394 kubelet[2831]: E0625 16:27:13.539837 2831 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-a46e2cd05c" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-a46e2cd05c' and this object Jun 25 16:27:13.991394 kubelet[2831]: I0625 16:27:13.541710 2831 topology_manager.go:215] "Topology Admit Handler" podUID="f124e573-6d0c-4a4e-b4c6-5bf17013ade6" podNamespace="kube-system" podName="coredns-5dd5756b68-45gk9" Jun 25 16:27:13.991394 kubelet[2831]: I0625 16:27:13.544285 2831 topology_manager.go:215] "Topology Admit Handler" podUID="6397532b-83a6-4d2d-bcc3-8908e6d508d3" podNamespace="calico-system" podName="calico-kube-controllers-58cc6dbf49-rlrpx" Jun 25 16:27:13.991394 kubelet[2831]: I0625 16:27:13.620361 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c7c79a-d44c-47ce-a93f-c54170ddf76b-config-volume\") pod \"coredns-5dd5756b68-d6rxn\" (UID: \"75c7c79a-d44c-47ce-a93f-c54170ddf76b\") " pod="kube-system/coredns-5dd5756b68-d6rxn" Jun 25 16:27:13.991394 kubelet[2831]: I0625 16:27:13.620465 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8jj\" (UniqueName: \"kubernetes.io/projected/75c7c79a-d44c-47ce-a93f-c54170ddf76b-kube-api-access-hp8jj\") pod \"coredns-5dd5756b68-d6rxn\" (UID: \"75c7c79a-d44c-47ce-a93f-c54170ddf76b\") " pod="kube-system/coredns-5dd5756b68-d6rxn" Jun 25 16:27:13.535671 systemd[1]: Created slice kubepods-burstable-pod75c7c79a_d44c_47ce_a93f_c54170ddf76b.slice - libcontainer container kubepods-burstable-pod75c7c79a_d44c_47ce_a93f_c54170ddf76b.slice. 
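Each admitted pod gets a transient systemd slice whose name embeds its QoS class and its UID with dashes turned into underscores, as in the kubepods-burstable-pod75c7c79a_… slice created just above. A small sketch of that naming as it appears in this log (the helper is illustrative and assumes the systemd cgroup driver):

def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    # Mirror the slice names seen in this journal: kubepods-<qos>-pod<uid>.slice,
    # with '-' in the UID replaced by '_' because systemd treats '-' as a
    # hierarchy separator in slice names.  Guaranteed-QoS pods drop the class
    # segment; that case does not occur in this log and is left out here.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("75c7c79a-d44c-47ce-a93f-c54170ddf76b", "burstable"))
# -> kubepods-burstable-pod75c7c79a_d44c_47ce_a93f_c54170ddf76b.slice
print(pod_slice_name("6397532b-83a6-4d2d-bcc3-8908e6d508d3", "besteffort"))
# -> kubepods-besteffort-pod6397532b_83a6_4d2d_bcc3_8908e6d508d3.slice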
Jun 25 16:27:13.992523 kubelet[2831]: I0625 16:27:13.721785 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6397532b-83a6-4d2d-bcc3-8908e6d508d3-tigera-ca-bundle\") pod \"calico-kube-controllers-58cc6dbf49-rlrpx\" (UID: \"6397532b-83a6-4d2d-bcc3-8908e6d508d3\") " pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" Jun 25 16:27:13.992523 kubelet[2831]: I0625 16:27:13.721853 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4x9\" (UniqueName: \"kubernetes.io/projected/6397532b-83a6-4d2d-bcc3-8908e6d508d3-kube-api-access-mf4x9\") pod \"calico-kube-controllers-58cc6dbf49-rlrpx\" (UID: \"6397532b-83a6-4d2d-bcc3-8908e6d508d3\") " pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" Jun 25 16:27:13.992523 kubelet[2831]: I0625 16:27:13.721894 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptfn\" (UniqueName: \"kubernetes.io/projected/f124e573-6d0c-4a4e-b4c6-5bf17013ade6-kube-api-access-mptfn\") pod \"coredns-5dd5756b68-45gk9\" (UID: \"f124e573-6d0c-4a4e-b4c6-5bf17013ade6\") " pod="kube-system/coredns-5dd5756b68-45gk9" Jun 25 16:27:13.992523 kubelet[2831]: I0625 16:27:13.721930 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f124e573-6d0c-4a4e-b4c6-5bf17013ade6-config-volume\") pod \"coredns-5dd5756b68-45gk9\" (UID: \"f124e573-6d0c-4a4e-b4c6-5bf17013ade6\") " pod="kube-system/coredns-5dd5756b68-45gk9" Jun 25 16:27:13.549416 systemd[1]: Created slice kubepods-burstable-podf124e573_6d0c_4a4e_b4c6_5bf17013ade6.slice - libcontainer container kubepods-burstable-podf124e573_6d0c_4a4e_b4c6_5bf17013ade6.slice. Jun 25 16:27:13.555671 systemd[1]: Created slice kubepods-besteffort-pod6397532b_83a6_4d2d_bcc3_8908e6d508d3.slice - libcontainer container kubepods-besteffort-pod6397532b_83a6_4d2d_bcc3_8908e6d508d3.slice. Jun 25 16:27:14.310758 containerd[1477]: time="2024-06-25T16:27:14.310608919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc6dbf49-rlrpx,Uid:6397532b-83a6-4d2d-bcc3-8908e6d508d3,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:14.601849 systemd[1]: Created slice kubepods-besteffort-pod8474323b_f265_4427_9f9e_fd6fa285383b.slice - libcontainer container kubepods-besteffort-pod8474323b_f265_4427_9f9e_fd6fa285383b.slice. Jun 25 16:27:14.604488 containerd[1477]: time="2024-06-25T16:27:14.604450163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prpxb,Uid:8474323b-f265-4427-9f9e-fd6fa285383b,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:14.723121 kubelet[2831]: E0625 16:27:14.723078 2831 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:27:14.723474 kubelet[2831]: E0625 16:27:14.723453 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75c7c79a-d44c-47ce-a93f-c54170ddf76b-config-volume podName:75c7c79a-d44c-47ce-a93f-c54170ddf76b nodeName:}" failed. No retries permitted until 2024-06-25 16:27:15.223421506 +0000 UTC m=+48.196090083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/75c7c79a-d44c-47ce-a93f-c54170ddf76b-config-volume") pod "coredns-5dd5756b68-d6rxn" (UID: "75c7c79a-d44c-47ce-a93f-c54170ddf76b") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:27:14.823570 kubelet[2831]: E0625 16:27:14.823516 2831 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:27:14.823813 kubelet[2831]: E0625 16:27:14.823655 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f124e573-6d0c-4a4e-b4c6-5bf17013ade6-config-volume podName:f124e573-6d0c-4a4e-b4c6-5bf17013ade6 nodeName:}" failed. No retries permitted until 2024-06-25 16:27:15.3236259 +0000 UTC m=+48.296294477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f124e573-6d0c-4a4e-b4c6-5bf17013ade6-config-volume") pod "coredns-5dd5756b68-45gk9" (UID: "f124e573-6d0c-4a4e-b4c6-5bf17013ade6") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:27:15.150659 containerd[1477]: time="2024-06-25T16:27:15.150568795Z" level=info msg="shim disconnected" id=b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672 namespace=k8s.io Jun 25 16:27:15.150870 containerd[1477]: time="2024-06-25T16:27:15.150762091Z" level=warning msg="cleaning up after shim disconnected" id=b47d496ffbefd63e944a19e0423486dd1db4b5e9c064dada2d4c05f31a2f1672 namespace=k8s.io Jun 25 16:27:15.150870 containerd[1477]: time="2024-06-25T16:27:15.150782890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:15.286110 containerd[1477]: time="2024-06-25T16:27:15.286020384Z" level=error msg="Failed to destroy network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.289163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521-shm.mount: Deactivated successfully. 
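Every sandbox failure that follows cites the same two preconditions: a CNI network config under /etc/cni/net.d (the earlier reload error still finds none, even after /etc/cni/net.d/calico-kubeconfig was written) and the /var/lib/calico/nodename file that, per the error text, a running calico/node container should have written. A small diagnostic sketch, illustrative only, that checks the two paths the errors point at:

from pathlib import Path

def calico_cni_preflight(cni_dir: str = "/etc/cni/net.d",
                         nodename_file: str = "/var/lib/calico/nodename") -> None:
    # Report the two conditions the sandbox errors in this log complain about:
    # a CNI network config being present, and calico/node having written its
    # nodename file.
    confs = sorted(p.name for p in Path(cni_dir).glob("*.conf*"))
    print(f"{cni_dir}: {confs if confs else 'no network config found'}")
    nodename = Path(nodename_file)
    if nodename.is_file():
        print(f"{nodename_file}: {nodename.read_text().strip()!r}")
    else:
        print(f"{nodename_file}: missing (calico/node not running or /var/lib/calico not mounted)")

calico_cni_preflight()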
Jun 25 16:27:15.290388 containerd[1477]: time="2024-06-25T16:27:15.286056783Z" level=error msg="Failed to destroy network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.290663 containerd[1477]: time="2024-06-25T16:27:15.290382981Z" level=error msg="encountered an error cleaning up failed sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.290821 containerd[1477]: time="2024-06-25T16:27:15.290788071Z" level=error msg="encountered an error cleaning up failed sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.290905 containerd[1477]: time="2024-06-25T16:27:15.290840970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prpxb,Uid:8474323b-f265-4427-9f9e-fd6fa285383b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.291015 containerd[1477]: time="2024-06-25T16:27:15.290804871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc6dbf49-rlrpx,Uid:6397532b-83a6-4d2d-bcc3-8908e6d508d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.291160 kubelet[2831]: E0625 16:27:15.291136 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.291482 kubelet[2831]: E0625 16:27:15.291210 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prpxb" Jun 25 16:27:15.291482 kubelet[2831]: E0625 16:27:15.291241 2831 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prpxb" Jun 25 16:27:15.291482 kubelet[2831]: E0625 16:27:15.291314 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-prpxb_calico-system(8474323b-f265-4427-9f9e-fd6fa285383b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-prpxb_calico-system(8474323b-f265-4427-9f9e-fd6fa285383b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:15.292321 kubelet[2831]: E0625 16:27:15.292134 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.292321 kubelet[2831]: E0625 16:27:15.292180 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" Jun 25 16:27:15.292321 kubelet[2831]: E0625 16:27:15.292217 2831 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" Jun 25 16:27:15.292505 kubelet[2831]: E0625 16:27:15.292287 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58cc6dbf49-rlrpx_calico-system(6397532b-83a6-4d2d-bcc3-8908e6d508d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58cc6dbf49-rlrpx_calico-system(6397532b-83a6-4d2d-bcc3-8908e6d508d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" podUID="6397532b-83a6-4d2d-bcc3-8908e6d508d3" Jun 25 16:27:15.498944 containerd[1477]: time="2024-06-25T16:27:15.498782840Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-45gk9,Uid:f124e573-6d0c-4a4e-b4c6-5bf17013ade6,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:15.499833 containerd[1477]: time="2024-06-25T16:27:15.498782940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6rxn,Uid:75c7c79a-d44c-47ce-a93f-c54170ddf76b,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:15.648469 containerd[1477]: time="2024-06-25T16:27:15.648398394Z" level=error msg="Failed to destroy network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.649366 containerd[1477]: time="2024-06-25T16:27:15.649314672Z" level=error msg="encountered an error cleaning up failed sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.649562 containerd[1477]: time="2024-06-25T16:27:15.649525767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45gk9,Uid:f124e573-6d0c-4a4e-b4c6-5bf17013ade6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.650185 kubelet[2831]: E0625 16:27:15.649940 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.650185 kubelet[2831]: E0625 16:27:15.650007 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-45gk9" Jun 25 16:27:15.650185 kubelet[2831]: E0625 16:27:15.650034 2831 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-45gk9" Jun 25 16:27:15.650475 kubelet[2831]: E0625 16:27:15.650099 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-45gk9_kube-system(f124e573-6d0c-4a4e-b4c6-5bf17013ade6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-5dd5756b68-45gk9_kube-system(f124e573-6d0c-4a4e-b4c6-5bf17013ade6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-45gk9" podUID="f124e573-6d0c-4a4e-b4c6-5bf17013ade6" Jun 25 16:27:15.657909 containerd[1477]: time="2024-06-25T16:27:15.657848470Z" level=error msg="Failed to destroy network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.658231 containerd[1477]: time="2024-06-25T16:27:15.658187862Z" level=error msg="encountered an error cleaning up failed sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.658315 containerd[1477]: time="2024-06-25T16:27:15.658257960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6rxn,Uid:75c7c79a-d44c-47ce-a93f-c54170ddf76b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.658540 kubelet[2831]: E0625 16:27:15.658515 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.658661 kubelet[2831]: E0625 16:27:15.658618 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d6rxn" Jun 25 16:27:15.658661 kubelet[2831]: E0625 16:27:15.658649 2831 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d6rxn" Jun 25 16:27:15.658759 kubelet[2831]: E0625 16:27:15.658719 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-d6rxn_kube-system(75c7c79a-d44c-47ce-a93f-c54170ddf76b)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-d6rxn_kube-system(75c7c79a-d44c-47ce-a93f-c54170ddf76b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d6rxn" podUID="75c7c79a-d44c-47ce-a93f-c54170ddf76b" Jun 25 16:27:15.744649 kubelet[2831]: I0625 16:27:15.744314 2831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:15.745424 containerd[1477]: time="2024-06-25T16:27:15.745372495Z" level=info msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" Jun 25 16:27:15.745905 containerd[1477]: time="2024-06-25T16:27:15.745871183Z" level=info msg="Ensure that sandbox bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521 in task-service has been cleanup successfully" Jun 25 16:27:15.748084 kubelet[2831]: I0625 16:27:15.748048 2831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:15.748694 containerd[1477]: time="2024-06-25T16:27:15.748656217Z" level=info msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" Jun 25 16:27:15.750400 containerd[1477]: time="2024-06-25T16:27:15.749107906Z" level=info msg="Ensure that sandbox da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e in task-service has been cleanup successfully" Jun 25 16:27:15.752583 kubelet[2831]: I0625 16:27:15.752228 2831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:15.752871 containerd[1477]: time="2024-06-25T16:27:15.752843418Z" level=info msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" Jun 25 16:27:15.753687 containerd[1477]: time="2024-06-25T16:27:15.753432404Z" level=info msg="Ensure that sandbox 2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b in task-service has been cleanup successfully" Jun 25 16:27:15.759750 containerd[1477]: time="2024-06-25T16:27:15.759717355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:27:15.761725 kubelet[2831]: I0625 16:27:15.761702 2831 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:15.765012 containerd[1477]: time="2024-06-25T16:27:15.764970730Z" level=info msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" Jun 25 16:27:15.766140 containerd[1477]: time="2024-06-25T16:27:15.766115303Z" level=info msg="Ensure that sandbox 06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467 in task-service has been cleanup successfully" Jun 25 16:27:15.835839 containerd[1477]: time="2024-06-25T16:27:15.835762052Z" level=error msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" failed" error="failed to destroy network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.836106 kubelet[2831]: E0625 16:27:15.836069 2831 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:15.836227 kubelet[2831]: E0625 16:27:15.836213 2831 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521"} Jun 25 16:27:15.836287 kubelet[2831]: E0625 16:27:15.836266 2831 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6397532b-83a6-4d2d-bcc3-8908e6d508d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:15.836397 kubelet[2831]: E0625 16:27:15.836338 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6397532b-83a6-4d2d-bcc3-8908e6d508d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" podUID="6397532b-83a6-4d2d-bcc3-8908e6d508d3" Jun 25 16:27:15.843533 containerd[1477]: time="2024-06-25T16:27:15.843457770Z" level=error msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" failed" error="failed to destroy network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.843828 kubelet[2831]: E0625 16:27:15.843807 2831 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:15.843936 kubelet[2831]: E0625 16:27:15.843852 2831 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e"} Jun 25 16:27:15.843936 kubelet[2831]: E0625 16:27:15.843894 2831 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"75c7c79a-d44c-47ce-a93f-c54170ddf76b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:15.843936 kubelet[2831]: E0625 16:27:15.843932 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75c7c79a-d44c-47ce-a93f-c54170ddf76b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d6rxn" podUID="75c7c79a-d44c-47ce-a93f-c54170ddf76b" Jun 25 16:27:15.857890 containerd[1477]: time="2024-06-25T16:27:15.857821829Z" level=error msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" failed" error="failed to destroy network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.858460 kubelet[2831]: E0625 16:27:15.858244 2831 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:15.858460 kubelet[2831]: E0625 16:27:15.858292 2831 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b"} Jun 25 16:27:15.858460 kubelet[2831]: E0625 16:27:15.858367 2831 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f124e573-6d0c-4a4e-b4c6-5bf17013ade6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:15.858460 kubelet[2831]: E0625 16:27:15.858421 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f124e573-6d0c-4a4e-b4c6-5bf17013ade6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-45gk9" podUID="f124e573-6d0c-4a4e-b4c6-5bf17013ade6" Jun 25 16:27:15.862880 containerd[1477]: 
time="2024-06-25T16:27:15.862823711Z" level=error msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" failed" error="failed to destroy network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:15.863109 kubelet[2831]: E0625 16:27:15.863074 2831 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:15.863216 kubelet[2831]: E0625 16:27:15.863117 2831 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467"} Jun 25 16:27:15.863216 kubelet[2831]: E0625 16:27:15.863159 2831 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8474323b-f265-4427-9f9e-fd6fa285383b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:15.863216 kubelet[2831]: E0625 16:27:15.863201 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8474323b-f265-4427-9f9e-fd6fa285383b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prpxb" podUID="8474323b-f265-4427-9f9e-fd6fa285383b" Jun 25 16:27:16.186965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467-shm.mount: Deactivated successfully. 
Jun 25 16:27:23.339000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.354006 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:27:23.354162 kernel: audit: type=1400 audit(1719332843.339:547): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.339000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f1d0a0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:23.373730 kernel: audit: type=1300 audit(1719332843.339:547): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f1d0a0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:23.339000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:23.393768 kernel: audit: type=1327 audit(1719332843.339:547): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:23.408684 kernel: audit: type=1400 audit(1719332843.340:548): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.340000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.340000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e712c0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:23.425623 kernel: audit: type=1300 audit(1719332843.340:548): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e712c0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:23.340000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:23.440629 kernel: audit: type=1327 audit(1719332843.340:548): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.613691 kernel: audit: type=1400 audit(1719332843.601:549): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00c178fa0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.629633 kernel: audit: type=1300 audit(1719332843.601:549): arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00c178fa0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:23.648700 kernel: audit: type=1327 audit(1719332843.601:549): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00cec71a0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 
16:27:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00cdb1f20 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.661731 kernel: audit: type=1400 audit(1719332843.601:550): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:23.628000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.628000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c00d05efc0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.628000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:23.660000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.660000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c00d05f9b0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.660000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:23.660000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:23.660000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00746adc0 a2=fc6 a3=0 
items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:27:23.660000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:27:24.234027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720419566.mount: Deactivated successfully. Jun 25 16:27:24.278853 containerd[1477]: time="2024-06-25T16:27:24.278794938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.282612 containerd[1477]: time="2024-06-25T16:27:24.282532958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:27:24.285724 containerd[1477]: time="2024-06-25T16:27:24.285681991Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.291680 containerd[1477]: time="2024-06-25T16:27:24.291644063Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.295155 containerd[1477]: time="2024-06-25T16:27:24.295080190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.295714 containerd[1477]: time="2024-06-25T16:27:24.295669178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.534580155s" Jun 25 16:27:24.295866 containerd[1477]: time="2024-06-25T16:27:24.295721676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:27:24.314348 containerd[1477]: time="2024-06-25T16:27:24.314302980Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:27:24.371860 containerd[1477]: time="2024-06-25T16:27:24.371806253Z" level=info msg="CreateContainer within sandbox \"2c3c1ef5151007e86fb11cbdb341da41880061489280a294fb8cbd8d7c098b6a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483\"" Jun 25 16:27:24.373053 containerd[1477]: time="2024-06-25T16:27:24.373020527Z" level=info msg="StartContainer for \"a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483\"" Jun 25 16:27:24.397856 systemd[1]: Started cri-containerd-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483.scope - libcontainer container a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483. 
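The AVC/SYSCALL audit records in the 16:27:23 stretch above are SELinux denials, not missing files: on x86-64 (arch=c000003e) syscall 254 is inotify_add_watch, and exit=-13 is -EACCES, so kube-controller-manager and kube-apiserver are being refused permission to watch the certificates under /etc/kubernetes/pki (container_t may not watch etc_t, permissive=0). A minimal decoding sketch for linux/amd64, using only the stock syscall package; the numbers are copied from the records above:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Values taken from the SYSCALL records above (linux/amd64).
	const nr = 254   // syscall= field
	const exit = -13 // exit= field

	// On linux/amd64 the syscall package confirms 254 is inotify_add_watch.
	fmt.Println("nr matches inotify_add_watch:", nr == syscall.SYS_INOTIFY_ADD_WATCH)
	// A negative return value is -errno; -13 decodes to EACCES.
	fmt.Println("exit means:", syscall.Errno(-exit)) // "permission denied"
}
```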
Jun 25 16:27:24.413000 audit: BPF prog-id=178 op=LOAD Jun 25 16:27:24.413000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3713 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138343861333965393762376166323632353737346462363234313631 Jun 25 16:27:24.414000 audit: BPF prog-id=179 op=LOAD Jun 25 16:27:24.414000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3713 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138343861333965393762376166323632353737346462363234313631 Jun 25 16:27:24.414000 audit: BPF prog-id=179 op=UNLOAD Jun 25 16:27:24.414000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:27:24.414000 audit: BPF prog-id=180 op=LOAD Jun 25 16:27:24.414000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3713 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:24.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138343861333965393762376166323632353737346462363234313631 Jun 25 16:27:24.438020 containerd[1477]: time="2024-06-25T16:27:24.437963541Z" level=info msg="StartContainer for \"a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483\" returns successfully" Jun 25 16:27:24.742088 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:27:24.742504 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:27:24.830169 kubelet[2831]: I0625 16:27:24.830125 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-fffvm" podStartSLOduration=2.245515859 podCreationTimestamp="2024-06-25 16:27:00 +0000 UTC" firstStartedPulling="2024-06-25 16:27:01.71163705 +0000 UTC m=+34.684305627" lastFinishedPulling="2024-06-25 16:27:24.296191266 +0000 UTC m=+57.268859743" observedRunningTime="2024-06-25 16:27:24.828257614 +0000 UTC m=+57.800926191" watchObservedRunningTime="2024-06-25 16:27:24.830069975 +0000 UTC m=+57.802738452" Jun 25 16:27:25.827268 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.oyJWzX.mount: Deactivated successfully. 
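The proctitle= fields in these audit records (for runc here, and for kube-controller-manager, kube-apiserver, tee, bpftool and iptables-nft-restore elsewhere in this section) are the process command line, hex-encoded with NUL separators between arguments. A small stdlib-only decoder (decodeProctitle is an illustrative helper name, not part of any audit tooling):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit proctitle= value back into argv.
func decodeProctitle(s string) ([]string, error) {
	raw, err := hex.DecodeString(s)
	if err != nil {
		return nil, err
	}
	// Arguments are separated by NUL bytes in the raw record.
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	// Prefix of the iptables-nft-restore proctitle seen later in this section.
	args, err := decodeProctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365")
	if err != nil {
		panic(err)
	}
	fmt.Println(args) // [iptables-nft-restore --noflush --verbose]
}
```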
Jun 25 16:27:26.208000 audit[4267]: AVC avc: denied { write } for pid=4267 comm="tee" name="fd" dev="proc" ino=32405 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.208000 audit[4267]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5d3d2a0e a2=241 a3=1b6 items=1 ppid=4231 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.208000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:27:26.208000 audit: PATH item=0 name="/dev/fd/63" inode=32928 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.224000 audit[4264]: AVC avc: denied { write } for pid=4264 comm="tee" name="fd" dev="proc" ino=32946 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.224000 audit[4264]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb340fa0c a2=241 a3=1b6 items=1 ppid=4236 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.224000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:27:26.224000 audit: PATH item=0 name="/dev/fd/63" inode=32927 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.239000 audit[4270]: AVC avc: denied { write } for pid=4270 comm="tee" name="fd" dev="proc" ino=32954 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.243000 audit[4281]: AVC avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=32957 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.244000 audit[4275]: AVC avc: denied { write } for pid=4275 comm="tee" name="fd" dev="proc" ino=32959 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.244000 audit[4275]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd08b69a0d a2=241 a3=1b6 items=1 ppid=4235 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.239000 audit[4270]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed41aca0c a2=241 a3=1b6 items=1 ppid=4243 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.244000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:27:26.244000 audit: PATH item=0 name="/dev/fd/63" 
inode=32935 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.239000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:27:26.239000 audit: PATH item=0 name="/dev/fd/63" inode=32409 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.243000 audit[4281]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa5cac9fd a2=241 a3=1b6 items=1 ppid=4245 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.243000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:27:26.243000 audit: PATH item=0 name="/dev/fd/63" inode=32949 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.274000 audit[4294]: AVC avc: denied { write } for pid=4294 comm="tee" name="fd" dev="proc" ino=32418 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.274000 audit[4294]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc76557a0c a2=241 a3=1b6 items=1 ppid=4239 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.274000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:27:26.274000 audit: PATH item=0 name="/dev/fd/63" inode=32968 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.289000 audit[4307]: AVC avc: denied { write } for pid=4307 comm="tee" name="fd" dev="proc" ino=32424 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:26.289000 audit[4307]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea235c9fc a2=241 a3=1b6 items=1 ppid=4255 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.289000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:27:26.289000 audit: PATH item=0 name="/dev/fd/63" inode=32976 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:26.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:26.637489 systemd-networkd[1234]: vxlan.calico: Link UP Jun 25 16:27:26.637500 systemd-networkd[1234]: vxlan.calico: Gained carrier Jun 25 16:27:26.657000 audit: BPF prog-id=181 op=LOAD Jun 25 16:27:26.657000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff790c54b0 a2=70 a3=7fb235623000 items=0 ppid=4238 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.657000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:26.657000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:27:26.657000 audit: BPF prog-id=182 op=LOAD Jun 25 16:27:26.657000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff790c54b0 a2=70 a3=6f items=0 ppid=4238 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.657000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:26.658000 audit: BPF prog-id=182 op=UNLOAD Jun 25 16:27:26.658000 audit: BPF prog-id=183 op=LOAD Jun 25 16:27:26.658000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff790c5440 a2=70 a3=7fff790c54b0 items=0 ppid=4238 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.658000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:26.658000 audit: BPF prog-id=183 op=UNLOAD Jun 25 16:27:26.659000 audit: BPF prog-id=184 op=LOAD Jun 25 16:27:26.659000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff790c5470 a2=70 a3=0 items=0 ppid=4238 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.659000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:26.668000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:27:26.770000 audit[4402]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:26.770000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffad6a30f0 a2=0 a3=7fffad6a30dc items=0 ppid=4238 pid=4402 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.770000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:26.774000 audit[4401]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:26.774000 audit[4401]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffedf64d380 a2=0 a3=7ffedf64d36c items=0 ppid=4238 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:26.787000 audit[4403]: NETFILTER_CFG table=filter:107 family=2 entries=39 op=nft_register_chain pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:26.787000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffda18dfd30 a2=0 a3=7ffda18dfd1c items=0 ppid=4238 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.787000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:26.799000 audit[4400]: NETFILTER_CFG table=raw:108 family=2 entries=19 op=nft_register_chain pid=4400 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:26.799000 audit[4400]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffe0a080360 a2=0 a3=7ffe0a08034c items=0 ppid=4238 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.799000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:27.039000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:27.039000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00016e0c0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:27.039000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:27.041000 audit[2707]: AVC avc: 
denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:27.041000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f1dde0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:27.041000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00016e140 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:27.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c7a440 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:27:27.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:27.591787 containerd[1477]: time="2024-06-25T16:27:27.591744998Z" level=info msg="StopPodSandbox for \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\"" Jun 25 16:27:27.592259 containerd[1477]: time="2024-06-25T16:27:27.591848296Z" level=info msg="TearDown network for sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" successfully" Jun 25 16:27:27.592259 containerd[1477]: time="2024-06-25T16:27:27.591898895Z" level=info msg="StopPodSandbox for \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" returns successfully" Jun 25 16:27:27.592365 containerd[1477]: time="2024-06-25T16:27:27.592282087Z" level=info msg="RemovePodSandbox for 
\"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\"" Jun 25 16:27:27.592365 containerd[1477]: time="2024-06-25T16:27:27.592316286Z" level=info msg="Forcibly stopping sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\"" Jun 25 16:27:27.592456 containerd[1477]: time="2024-06-25T16:27:27.592405485Z" level=info msg="TearDown network for sandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" successfully" Jun 25 16:27:27.602142 containerd[1477]: time="2024-06-25T16:27:27.602102584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:27:27.602385 containerd[1477]: time="2024-06-25T16:27:27.602345679Z" level=info msg="RemovePodSandbox \"317b743338bd6afb8c9c593b626b8dffb86cb5762b5915bae05a496381dcc049\" returns successfully" Jun 25 16:27:27.602929 containerd[1477]: time="2024-06-25T16:27:27.602896768Z" level=info msg="StopPodSandbox for \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\"" Jun 25 16:27:27.603031 containerd[1477]: time="2024-06-25T16:27:27.602981866Z" level=info msg="TearDown network for sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" successfully" Jun 25 16:27:27.603087 containerd[1477]: time="2024-06-25T16:27:27.603032165Z" level=info msg="StopPodSandbox for \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" returns successfully" Jun 25 16:27:27.603355 containerd[1477]: time="2024-06-25T16:27:27.603331659Z" level=info msg="RemovePodSandbox for \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\"" Jun 25 16:27:27.603433 containerd[1477]: time="2024-06-25T16:27:27.603363458Z" level=info msg="Forcibly stopping sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\"" Jun 25 16:27:27.603482 containerd[1477]: time="2024-06-25T16:27:27.603455256Z" level=info msg="TearDown network for sandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" successfully" Jun 25 16:27:27.626085 containerd[1477]: time="2024-06-25T16:27:27.626036989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:27:27.626265 containerd[1477]: time="2024-06-25T16:27:27.626155286Z" level=info msg="RemovePodSandbox \"79ff8e32d2e5ac4d296978185acf03ca0def9ef7e4bbf4fc456743133c6798f1\" returns successfully" Jun 25 16:27:28.006742 systemd-networkd[1234]: vxlan.calico: Gained IPv6LL Jun 25 16:27:29.597974 containerd[1477]: time="2024-06-25T16:27:29.597905341Z" level=info msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.644 [INFO][4431] k8s.go 608: Cleaning up netns ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.644 [INFO][4431] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" iface="eth0" netns="/var/run/netns/cni-042b1a3b-0448-5fc9-f09c-e69b1d1417ae" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.644 [INFO][4431] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" iface="eth0" netns="/var/run/netns/cni-042b1a3b-0448-5fc9-f09c-e69b1d1417ae" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.645 [INFO][4431] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" iface="eth0" netns="/var/run/netns/cni-042b1a3b-0448-5fc9-f09c-e69b1d1417ae" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.645 [INFO][4431] k8s.go 615: Releasing IP address(es) ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.645 [INFO][4431] utils.go 188: Calico CNI releasing IP address ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.666 [INFO][4437] ipam_plugin.go 411: Releasing address using handleID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.666 [INFO][4437] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.667 [INFO][4437] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.672 [WARNING][4437] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.672 [INFO][4437] ipam_plugin.go 439: Releasing address using workloadID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.673 [INFO][4437] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:29.675678 containerd[1477]: 2024-06-25 16:27:29.674 [INFO][4431] k8s.go 621: Teardown processing complete. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:27:29.680575 containerd[1477]: time="2024-06-25T16:27:29.679147493Z" level=info msg="TearDown network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" successfully" Jun 25 16:27:29.680575 containerd[1477]: time="2024-06-25T16:27:29.679197892Z" level=info msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" returns successfully" Jun 25 16:27:29.679148 systemd[1]: run-netns-cni\x2d042b1a3b\x2d0448\x2d5fc9\x2df09c\x2de69b1d1417ae.mount: Deactivated successfully. 
Jun 25 16:27:29.681267 containerd[1477]: time="2024-06-25T16:27:29.681232951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prpxb,Uid:8474323b-f265-4427-9f9e-fd6fa285383b,Namespace:calico-system,Attempt:1,}" Jun 25 16:27:29.816222 systemd-networkd[1234]: cali533385cd980: Link UP Jun 25 16:27:29.822584 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:29.822744 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali533385cd980: link becomes ready Jun 25 16:27:29.823514 systemd-networkd[1234]: cali533385cd980: Gained carrier Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.747 [INFO][4444] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0 csi-node-driver- calico-system 8474323b-f265-4427-9f9e-fd6fa285383b 803 0 2024-06-25 16:26:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c csi-node-driver-prpxb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali533385cd980 [] []}} ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.747 [INFO][4444] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.772 [INFO][4455] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" HandleID="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.780 [INFO][4455] ipam_plugin.go 264: Auto assigning IP ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" HandleID="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000271de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"csi-node-driver-prpxb", "timestamp":"2024-06-25 16:27:29.772615297 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.780 [INFO][4455] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.780 [INFO][4455] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.780 [INFO][4455] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.781 [INFO][4455] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.785 [INFO][4455] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.788 [INFO][4455] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.790 [INFO][4455] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.792 [INFO][4455] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.792 [INFO][4455] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.793 [INFO][4455] ipam.go 1685: Creating new handle: k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.802 [INFO][4455] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.807 [INFO][4455] ipam.go 1216: Successfully claimed IPs: [192.168.52.193/26] block=192.168.52.192/26 handle="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.807 [INFO][4455] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.193/26] handle="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.807 [INFO][4455] ipam_plugin.go 373: Released host-wide IPAM lock. 
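The IPAM walk above ends with a single address, 192.168.52.193, claimed from the host-affine block 192.168.52.192/26 on ci-3815.2.4-a-a46e2cd05c. The containment and block-size arithmetic is plain CIDR math; a minimal check with net/netip, using only the values that appear in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.52.192/26") // host-affine IPAM block from the log
	ip := netip.MustParseAddr("192.168.52.193")         // address assigned for csi-node-driver-prpxb

	fmt.Println(block.Contains(ip))       // true: the claim stays inside the block
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
}
```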
Jun 25 16:27:29.841799 containerd[1477]: 2024-06-25 16:27:29.807 [INFO][4455] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.193/26] IPv6=[] ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" HandleID="k8s-pod-network.b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.810 [INFO][4444] k8s.go 386: Populated endpoint ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8474323b-f265-4427-9f9e-fd6fa285383b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"csi-node-driver-prpxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali533385cd980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.810 [INFO][4444] k8s.go 387: Calico CNI using IPs: [192.168.52.193/32] ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.810 [INFO][4444] dataplane_linux.go 68: Setting the host side veth name to cali533385cd980 ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.824 [INFO][4444] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.824 [INFO][4444] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" 
WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8474323b-f265-4427-9f9e-fd6fa285383b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c", Pod:"csi-node-driver-prpxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali533385cd980", MAC:"1a:2b:34:54:d4:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:29.842892 containerd[1477]: 2024-06-25 16:27:29.839 [INFO][4444] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c" Namespace="calico-system" Pod="csi-node-driver-prpxb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:27:29.872313 kernel: kauditd_printk_skb: 100 callbacks suppressed Jun 25 16:27:29.872494 kernel: audit: type=1325 audit(1719332849.861:583): table=filter:109 family=2 entries=34 op=nft_register_chain pid=4477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:29.861000 audit[4477]: NETFILTER_CFG table=filter:109 family=2 entries=34 op=nft_register_chain pid=4477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:29.861000 audit[4477]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffe8139c720 a2=0 a3=7ffe8139c70c items=0 ppid=4238 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.889617 kernel: audit: type=1300 audit(1719332849.861:583): arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffe8139c720 a2=0 a3=7ffe8139c70c items=0 ppid=4238 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.889672 containerd[1477]: time="2024-06-25T16:27:29.878743144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:29.889672 containerd[1477]: time="2024-06-25T16:27:29.878845542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:29.889672 containerd[1477]: time="2024-06-25T16:27:29.878879541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:29.889672 containerd[1477]: time="2024-06-25T16:27:29.878911341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:29.861000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:29.906272 kernel: audit: type=1327 audit(1719332849.861:583): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:29.913759 systemd[1]: Started cri-containerd-b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c.scope - libcontainer container b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c. Jun 25 16:27:29.923000 audit: BPF prog-id=185 op=LOAD Jun 25 16:27:29.926000 audit: BPF prog-id=186 op=LOAD Jun 25 16:27:29.930313 kernel: audit: type=1334 audit(1719332849.923:584): prog-id=185 op=LOAD Jun 25 16:27:29.930401 kernel: audit: type=1334 audit(1719332849.926:585): prog-id=186 op=LOAD Jun 25 16:27:29.926000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.939518 kernel: audit: type=1300 audit(1719332849.926:585): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623730643733363637393065363836656265396662376264333832 Jun 25 16:27:29.952630 kernel: audit: type=1327 audit(1719332849.926:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623730643733363637393065363836656265396662376264333832 Jun 25 16:27:29.926000 audit: BPF prog-id=187 op=LOAD Jun 25 16:27:29.926000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.961348 containerd[1477]: time="2024-06-25T16:27:29.956575665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prpxb,Uid:8474323b-f265-4427-9f9e-fd6fa285383b,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c\"" Jun 25 16:27:29.961348 containerd[1477]: time="2024-06-25T16:27:29.960435187Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:27:29.964880 kernel: audit: type=1334 audit(1719332849.926:586): prog-id=187 op=LOAD Jun 25 16:27:29.964994 kernel: audit: type=1300 audit(1719332849.926:586): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623730643733363637393065363836656265396662376264333832 Jun 25 16:27:29.975474 kernel: audit: type=1327 audit(1719332849.926:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623730643733363637393065363836656265396662376264333832 Jun 25 16:27:29.926000 audit: BPF prog-id=187 op=UNLOAD Jun 25 16:27:29.926000 audit: BPF prog-id=186 op=UNLOAD Jun 25 16:27:29.926000 audit: BPF prog-id=188 op=LOAD Jun 25 16:27:29.926000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:29.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623730643733363637393065363836656265396662376264333832 Jun 25 16:27:30.599641 containerd[1477]: time="2024-06-25T16:27:30.598166164Z" level=info msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" Jun 25 16:27:30.600390 containerd[1477]: time="2024-06-25T16:27:30.598168364Z" level=info msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.663 [INFO][4541] k8s.go 608: Cleaning up netns ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.663 [INFO][4541] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" iface="eth0" netns="/var/run/netns/cni-55eb7082-2054-48e6-d3a3-c8b367b72ebe" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.663 [INFO][4541] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" iface="eth0" netns="/var/run/netns/cni-55eb7082-2054-48e6-d3a3-c8b367b72ebe" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.664 [INFO][4541] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" iface="eth0" netns="/var/run/netns/cni-55eb7082-2054-48e6-d3a3-c8b367b72ebe" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.664 [INFO][4541] k8s.go 615: Releasing IP address(es) ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.664 [INFO][4541] utils.go 188: Calico CNI releasing IP address ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.698 [INFO][4563] ipam_plugin.go 411: Releasing address using handleID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.698 [INFO][4563] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.699 [INFO][4563] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.708 [WARNING][4563] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.708 [INFO][4563] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.709 [INFO][4563] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:30.712295 containerd[1477]: 2024-06-25 16:27:30.711 [INFO][4541] k8s.go 621: Teardown processing complete. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:27:30.716531 systemd[1]: run-netns-cni\x2d55eb7082\x2d2054\x2d48e6\x2dd3a3\x2dc8b367b72ebe.mount: Deactivated successfully. Jun 25 16:27:30.717889 containerd[1477]: time="2024-06-25T16:27:30.717840559Z" level=info msg="TearDown network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" successfully" Jun 25 16:27:30.718141 containerd[1477]: time="2024-06-25T16:27:30.718110154Z" level=info msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" returns successfully" Jun 25 16:27:30.719332 containerd[1477]: time="2024-06-25T16:27:30.719303230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc6dbf49-rlrpx,Uid:6397532b-83a6-4d2d-bcc3-8908e6d508d3,Namespace:calico-system,Attempt:1,}" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] k8s.go 608: Cleaning up netns ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" iface="eth0" netns="/var/run/netns/cni-a32fcad4-72ef-b99f-c9b0-3925ac075e2e" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" iface="eth0" netns="/var/run/netns/cni-a32fcad4-72ef-b99f-c9b0-3925ac075e2e" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" iface="eth0" netns="/var/run/netns/cni-a32fcad4-72ef-b99f-c9b0-3925ac075e2e" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] k8s.go 615: Releasing IP address(es) ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.676 [INFO][4556] utils.go 188: Calico CNI releasing IP address ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.712 [INFO][4568] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.713 [INFO][4568] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.717 [INFO][4568] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.723 [WARNING][4568] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.723 [INFO][4568] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.724 [INFO][4568] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:30.727114 containerd[1477]: 2024-06-25 16:27:30.725 [INFO][4556] k8s.go 621: Teardown processing complete. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:27:30.730936 systemd[1]: run-netns-cni\x2da32fcad4\x2d72ef\x2db99f\x2dc9b0\x2d3925ac075e2e.mount: Deactivated successfully. 
Jun 25 16:27:30.731317 containerd[1477]: time="2024-06-25T16:27:30.731272889Z" level=info msg="TearDown network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" successfully" Jun 25 16:27:30.731423 containerd[1477]: time="2024-06-25T16:27:30.731404887Z" level=info msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" returns successfully" Jun 25 16:27:30.733424 containerd[1477]: time="2024-06-25T16:27:30.733390447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45gk9,Uid:f124e573-6d0c-4a4e-b4c6-5bf17013ade6,Namespace:kube-system,Attempt:1,}" Jun 25 16:27:30.961737 systemd-networkd[1234]: califccca6ac0c0: Link UP Jun 25 16:27:30.973292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:30.973413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califccca6ac0c0: link becomes ready Jun 25 16:27:30.975232 systemd-networkd[1234]: califccca6ac0c0: Gained carrier Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.880 [INFO][4586] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0 calico-kube-controllers-58cc6dbf49- calico-system 6397532b-83a6-4d2d-bcc3-8908e6d508d3 812 0 2024-06-25 16:26:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58cc6dbf49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c calico-kube-controllers-58cc6dbf49-rlrpx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califccca6ac0c0 [] []}} ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.881 [INFO][4586] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.926 [INFO][4608] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" HandleID="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4608] ipam_plugin.go 264: Auto assigning IP ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" HandleID="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dded0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"calico-kube-controllers-58cc6dbf49-rlrpx", "timestamp":"2024-06-25 16:27:30.926430568 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4608] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4608] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4608] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.939 [INFO][4608] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.942 [INFO][4608] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.945 [INFO][4608] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.947 [INFO][4608] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.948 [INFO][4608] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.948 [INFO][4608] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.950 [INFO][4608] ipam.go 1685: Creating new handle: k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7 Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.953 [INFO][4608] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.957 [INFO][4608] ipam.go 1216: Successfully claimed IPs: [192.168.52.194/26] block=192.168.52.192/26 handle="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.957 [INFO][4608] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.194/26] handle="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.957 [INFO][4608] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:30.998005 containerd[1477]: 2024-06-25 16:27:30.957 [INFO][4608] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.194/26] IPv6=[] ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" HandleID="k8s-pod-network.e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.958 [INFO][4586] k8s.go 386: Populated endpoint ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0", GenerateName:"calico-kube-controllers-58cc6dbf49-", Namespace:"calico-system", SelfLink:"", UID:"6397532b-83a6-4d2d-bcc3-8908e6d508d3", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc6dbf49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"calico-kube-controllers-58cc6dbf49-rlrpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califccca6ac0c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.959 [INFO][4586] k8s.go 387: Calico CNI using IPs: [192.168.52.194/32] ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.959 [INFO][4586] dataplane_linux.go 68: Setting the host side veth name to califccca6ac0c0 ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.962 [INFO][4586] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.976 [INFO][4586] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0", GenerateName:"calico-kube-controllers-58cc6dbf49-", Namespace:"calico-system", SelfLink:"", UID:"6397532b-83a6-4d2d-bcc3-8908e6d508d3", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc6dbf49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7", Pod:"calico-kube-controllers-58cc6dbf49-rlrpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califccca6ac0c0", MAC:"ae:b5:9e:49:63:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:30.999074 containerd[1477]: 2024-06-25 16:27:30.992 [INFO][4586] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7" Namespace="calico-system" Pod="calico-kube-controllers-58cc6dbf49-rlrpx" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:27:31.037911 systemd-networkd[1234]: cali715d7d8a873: Link UP Jun 25 16:27:31.042701 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali715d7d8a873: link becomes ready Jun 25 16:27:31.042438 systemd-networkd[1234]: cali715d7d8a873: Gained carrier Jun 25 16:27:31.051000 audit[4637]: NETFILTER_CFG table=filter:110 family=2 entries=34 op=nft_register_chain pid=4637 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:31.051000 audit[4637]: SYSCALL arch=c000003e syscall=46 success=yes exit=18640 a0=3 a1=7ffc41c77b30 a2=0 a3=7ffc41c77b1c items=0 ppid=4238 pid=4637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.051000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:31.059925 containerd[1477]: time="2024-06-25T16:27:31.059743900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:31.059925 containerd[1477]: time="2024-06-25T16:27:31.059862697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:31.059925 containerd[1477]: time="2024-06-25T16:27:31.059883897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:31.059925 containerd[1477]: time="2024-06-25T16:27:31.059896697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.876 [INFO][4575] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0 coredns-5dd5756b68- kube-system f124e573-6d0c-4a4e-b4c6-5bf17013ade6 813 0 2024-06-25 16:26:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c coredns-5dd5756b68-45gk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali715d7d8a873 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.878 [INFO][4575] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.926 [INFO][4603] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" HandleID="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4603] ipam_plugin.go 264: Auto assigning IP ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" HandleID="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030d3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"coredns-5dd5756b68-45gk9", "timestamp":"2024-06-25 16:27:30.926431068 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.937 [INFO][4603] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.957 [INFO][4603] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.958 [INFO][4603] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.962 [INFO][4603] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.983 [INFO][4603] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:30.997 [INFO][4603] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.004 [INFO][4603] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.007 [INFO][4603] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.007 [INFO][4603] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.009 [INFO][4603] ipam.go 1685: Creating new handle: k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245 Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.017 [INFO][4603] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.028 [INFO][4603] ipam.go 1216: Successfully claimed IPs: [192.168.52.195/26] block=192.168.52.192/26 handle="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.028 [INFO][4603] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.195/26] handle="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.028 [INFO][4603] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:31.068243 containerd[1477]: 2024-06-25 16:27:31.028 [INFO][4603] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.195/26] IPv6=[] ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" HandleID="k8s-pod-network.1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.032 [INFO][4575] k8s.go 386: Populated endpoint ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f124e573-6d0c-4a4e-b4c6-5bf17013ade6", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"coredns-5dd5756b68-45gk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d7d8a873", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.032 [INFO][4575] k8s.go 387: Calico CNI using IPs: [192.168.52.195/32] ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.032 [INFO][4575] dataplane_linux.go 68: Setting the host side veth name to cali715d7d8a873 ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.042 [INFO][4575] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" 
WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.043 [INFO][4575] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f124e573-6d0c-4a4e-b4c6-5bf17013ade6", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245", Pod:"coredns-5dd5756b68-45gk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d7d8a873", MAC:"0e:9a:8a:26:d5:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:31.069299 containerd[1477]: 2024-06-25 16:27:31.066 [INFO][4575] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245" Namespace="kube-system" Pod="coredns-5dd5756b68-45gk9" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:27:31.097816 systemd[1]: Started cri-containerd-e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7.scope - libcontainer container e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7. 
Jun 25 16:27:31.117000 audit[4681]: NETFILTER_CFG table=filter:111 family=2 entries=42 op=nft_register_chain pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:31.117000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7fff2ac3cce0 a2=0 a3=7fff2ac3cccc items=0 ppid=4238 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.117000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:31.156370 containerd[1477]: time="2024-06-25T16:27:31.156256478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:31.156626 containerd[1477]: time="2024-06-25T16:27:31.156326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:31.156626 containerd[1477]: time="2024-06-25T16:27:31.156394975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:31.156626 containerd[1477]: time="2024-06-25T16:27:31.156425875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:31.174000 audit: BPF prog-id=189 op=LOAD Jun 25 16:27:31.174000 audit: BPF prog-id=190 op=LOAD Jun 25 16:27:31.174000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336131306663666636313636653231343736643434636635633266 Jun 25 16:27:31.174000 audit: BPF prog-id=191 op=LOAD Jun 25 16:27:31.174000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336131306663666636313636653231343736643434636635633266 Jun 25 16:27:31.174000 audit: BPF prog-id=191 op=UNLOAD Jun 25 16:27:31.174000 audit: BPF prog-id=190 op=UNLOAD Jun 25 16:27:31.174000 audit: BPF prog-id=192 op=LOAD Jun 25 16:27:31.174000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.174000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336131306663666636313636653231343736643434636635633266 Jun 25 16:27:31.184788 systemd[1]: Started cri-containerd-1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245.scope - libcontainer container 1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245. Jun 25 16:27:31.197000 audit: BPF prog-id=193 op=LOAD Jun 25 16:27:31.198000 audit: BPF prog-id=194 op=LOAD Jun 25 16:27:31.198000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4704 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346663376634613036333634373833373661346534323735616333 Jun 25 16:27:31.198000 audit: BPF prog-id=195 op=LOAD Jun 25 16:27:31.198000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4704 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346663376634613036333634373833373661346534323735616333 Jun 25 16:27:31.198000 audit: BPF prog-id=195 op=UNLOAD Jun 25 16:27:31.198000 audit: BPF prog-id=194 op=UNLOAD Jun 25 16:27:31.198000 audit: BPF prog-id=196 op=LOAD Jun 25 16:27:31.198000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4704 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346663376634613036333634373833373661346534323735616333 Jun 25 16:27:31.255219 containerd[1477]: time="2024-06-25T16:27:31.255067411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45gk9,Uid:f124e573-6d0c-4a4e-b4c6-5bf17013ade6,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245\"" Jun 25 16:27:31.261874 containerd[1477]: time="2024-06-25T16:27:31.261822976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc6dbf49-rlrpx,Uid:6397532b-83a6-4d2d-bcc3-8908e6d508d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7\"" Jun 25 16:27:31.262732 containerd[1477]: time="2024-06-25T16:27:31.262700559Z" level=info msg="CreateContainer within sandbox 
\"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:27:31.312887 containerd[1477]: time="2024-06-25T16:27:31.312834461Z" level=info msg="CreateContainer within sandbox \"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef4920d19e7835c6d034b1e05184fd593d1dce1c40a152df7d2735ee4d53bf2c\"" Jun 25 16:27:31.313509 containerd[1477]: time="2024-06-25T16:27:31.313474248Z" level=info msg="StartContainer for \"ef4920d19e7835c6d034b1e05184fd593d1dce1c40a152df7d2735ee4d53bf2c\"" Jun 25 16:27:31.337815 systemd[1]: Started cri-containerd-ef4920d19e7835c6d034b1e05184fd593d1dce1c40a152df7d2735ee4d53bf2c.scope - libcontainer container ef4920d19e7835c6d034b1e05184fd593d1dce1c40a152df7d2735ee4d53bf2c. Jun 25 16:27:31.350000 audit: BPF prog-id=197 op=LOAD Jun 25 16:27:31.351000 audit: BPF prog-id=198 op=LOAD Jun 25 16:27:31.351000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4704 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343932306431396537383335633664303334623165303531383466 Jun 25 16:27:31.351000 audit: BPF prog-id=199 op=LOAD Jun 25 16:27:31.351000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4704 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343932306431396537383335633664303334623165303531383466 Jun 25 16:27:31.351000 audit: BPF prog-id=199 op=UNLOAD Jun 25 16:27:31.351000 audit: BPF prog-id=198 op=UNLOAD Jun 25 16:27:31.351000 audit: BPF prog-id=200 op=LOAD Jun 25 16:27:31.351000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4704 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343932306431396537383335633664303334623165303531383466 Jun 25 16:27:31.378207 containerd[1477]: time="2024-06-25T16:27:31.378150360Z" level=info msg="StartContainer for \"ef4920d19e7835c6d034b1e05184fd593d1dce1c40a152df7d2735ee4d53bf2c\" returns successfully" Jun 25 16:27:31.462786 systemd-networkd[1234]: cali533385cd980: Gained IPv6LL Jun 25 16:27:31.600434 containerd[1477]: time="2024-06-25T16:27:31.599098561Z" level=info msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" Jun 
25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.643 [INFO][4794] k8s.go 608: Cleaning up netns ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.643 [INFO][4794] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" iface="eth0" netns="/var/run/netns/cni-b8992b07-c729-ca3a-c971-968ce0ab0a2c" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.644 [INFO][4794] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" iface="eth0" netns="/var/run/netns/cni-b8992b07-c729-ca3a-c971-968ce0ab0a2c" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.644 [INFO][4794] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" iface="eth0" netns="/var/run/netns/cni-b8992b07-c729-ca3a-c971-968ce0ab0a2c" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.644 [INFO][4794] k8s.go 615: Releasing IP address(es) ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.644 [INFO][4794] utils.go 188: Calico CNI releasing IP address ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.675 [INFO][4801] ipam_plugin.go 411: Releasing address using handleID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.675 [INFO][4801] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.675 [INFO][4801] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.681 [WARNING][4801] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.681 [INFO][4801] ipam_plugin.go 439: Releasing address using workloadID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.683 [INFO][4801] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:31.685478 containerd[1477]: 2024-06-25 16:27:31.684 [INFO][4794] k8s.go 621: Teardown processing complete. 
ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:27:31.686385 containerd[1477]: time="2024-06-25T16:27:31.685723837Z" level=info msg="TearDown network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" successfully" Jun 25 16:27:31.686385 containerd[1477]: time="2024-06-25T16:27:31.685774736Z" level=info msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" returns successfully" Jun 25 16:27:31.686907 containerd[1477]: time="2024-06-25T16:27:31.686865114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6rxn,Uid:75c7c79a-d44c-47ce-a93f-c54170ddf76b,Namespace:kube-system,Attempt:1,}" Jun 25 16:27:31.720455 systemd[1]: run-netns-cni\x2db8992b07\x2dc729\x2dca3a\x2dc971\x2d968ce0ab0a2c.mount: Deactivated successfully. Jun 25 16:27:31.874657 kubelet[2831]: I0625 16:27:31.874525 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-45gk9" podStartSLOduration=51.874476179 podCreationTimestamp="2024-06-25 16:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:31.848158003 +0000 UTC m=+64.820826580" watchObservedRunningTime="2024-06-25 16:27:31.874476179 +0000 UTC m=+64.847144656" Jun 25 16:27:31.921000 audit[4837]: NETFILTER_CFG table=filter:112 family=2 entries=14 op=nft_register_rule pid=4837 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:31.921000 audit[4837]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffecb869ac0 a2=0 a3=7ffecb869aac items=0 ppid=2968 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:31.922000 audit[4837]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4837 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:31.922000 audit[4837]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffecb869ac0 a2=0 a3=0 items=0 ppid=2968 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:31.966785 systemd-networkd[1234]: cali244ab3f2626: Link UP Jun 25 16:27:31.969000 audit[4839]: NETFILTER_CFG table=filter:114 family=2 entries=11 op=nft_register_rule pid=4839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:31.969000 audit[4839]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc356adb10 a2=0 a3=7ffc356adafc items=0 ppid=2968 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:31.972000 
audit[4839]: NETFILTER_CFG table=nat:115 family=2 entries=35 op=nft_register_chain pid=4839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:31.972000 audit[4839]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc356adb10 a2=0 a3=7ffc356adafc items=0 ppid=2968 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:31.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:31.990641 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:31.990754 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali244ab3f2626: link becomes ready Jun 25 16:27:31.992384 systemd-networkd[1234]: cali244ab3f2626: Gained carrier Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.801 [INFO][4808] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0 coredns-5dd5756b68- kube-system 75c7c79a-d44c-47ce-a93f-c54170ddf76b 828 0 2024-06-25 16:26:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c coredns-5dd5756b68-d6rxn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali244ab3f2626 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.801 [INFO][4808] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.879 [INFO][4829] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" HandleID="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.931 [INFO][4829] ipam_plugin.go 264: Auto assigning IP ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" HandleID="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003120b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"coredns-5dd5756b68-d6rxn", "timestamp":"2024-06-25 16:27:31.878810392 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.932 [INFO][4829] ipam_plugin.go 352: About 
to acquire host-wide IPAM lock. Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.932 [INFO][4829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.932 [INFO][4829] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.934 [INFO][4829] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.938 [INFO][4829] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.942 [INFO][4829] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.944 [INFO][4829] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.946 [INFO][4829] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.947 [INFO][4829] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.948 [INFO][4829] ipam.go 1685: Creating new handle: k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610 Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.953 [INFO][4829] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.961 [INFO][4829] ipam.go 1216: Successfully claimed IPs: [192.168.52.196/26] block=192.168.52.192/26 handle="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.961 [INFO][4829] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.196/26] handle="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.961 [INFO][4829] ipam_plugin.go 373: Released host-wide IPAM lock. 
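Editor's note: the IPAM trace above shows the node's affinity being confirmed for the block 192.168.52.192/26 and the address 192.168.52.196 being claimed from it. Purely as an illustration of that relationship built from the values in the log (not Calico's implementation), Go's net/netip can check that the claimed address sits inside the affined /26 and how many addresses such a block spans:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the ipam.go lines above.
	block := netip.MustParsePrefix("192.168.52.192/26")
	claimed := netip.MustParseAddr("192.168.52.196")

	// A /26 block spans 2^(32-26) = 64 addresses.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
	// The pod address handed out must fall inside the node's affined block.
	fmt.Printf("%s inside %s: %v\n", claimed, block, block.Contains(claimed))
}
```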
Jun 25 16:27:31.999338 containerd[1477]: 2024-06-25 16:27:31.961 [INFO][4829] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.196/26] IPv6=[] ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" HandleID="k8s-pod-network.1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.962 [INFO][4808] k8s.go 386: Populated endpoint ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"75c7c79a-d44c-47ce-a93f-c54170ddf76b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"coredns-5dd5756b68-d6rxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244ab3f2626", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.963 [INFO][4808] k8s.go 387: Calico CNI using IPs: [192.168.52.196/32] ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.963 [INFO][4808] dataplane_linux.go 68: Setting the host side veth name to cali244ab3f2626 ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.967 [INFO][4808] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" 
WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.967 [INFO][4808] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"75c7c79a-d44c-47ce-a93f-c54170ddf76b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610", Pod:"coredns-5dd5756b68-d6rxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244ab3f2626", MAC:"ee:8f:38:43:04:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:32.000306 containerd[1477]: 2024-06-25 16:27:31.993 [INFO][4808] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610" Namespace="kube-system" Pod="coredns-5dd5756b68-d6rxn" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:27:32.038000 audit[4855]: NETFILTER_CFG table=filter:116 family=2 entries=38 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:32.040366 systemd-networkd[1234]: califccca6ac0c0: Gained IPv6LL Jun 25 16:27:32.038000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7fffb0abb7e0 a2=0 a3=7fffb0abb7cc items=0 ppid=4238 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.038000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:32.077728 containerd[1477]: time="2024-06-25T16:27:32.077420052Z" level=info msg="loading 
plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:32.077728 containerd[1477]: time="2024-06-25T16:27:32.077473351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:32.077728 containerd[1477]: time="2024-06-25T16:27:32.077494750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:32.077728 containerd[1477]: time="2024-06-25T16:27:32.077508850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:32.105783 systemd[1]: Started cri-containerd-1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610.scope - libcontainer container 1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610. Jun 25 16:27:32.122000 audit: BPF prog-id=201 op=LOAD Jun 25 16:27:32.122000 audit: BPF prog-id=202 op=LOAD Jun 25 16:27:32.122000 audit[4874]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4864 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435383833653364353535356433353866343664383935323735 Jun 25 16:27:32.122000 audit: BPF prog-id=203 op=LOAD Jun 25 16:27:32.122000 audit[4874]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4864 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435383833653364353535356433353866343664383935323735 Jun 25 16:27:32.123000 audit: BPF prog-id=203 op=UNLOAD Jun 25 16:27:32.123000 audit: BPF prog-id=202 op=UNLOAD Jun 25 16:27:32.123000 audit: BPF prog-id=204 op=LOAD Jun 25 16:27:32.123000 audit[4874]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4864 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435383833653364353535356433353866343664383935323735 Jun 25 16:27:32.137325 containerd[1477]: time="2024-06-25T16:27:32.137283371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.140401 containerd[1477]: time="2024-06-25T16:27:32.140338310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, 
bytes read=7641062" Jun 25 16:27:32.144204 containerd[1477]: time="2024-06-25T16:27:32.143849941Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.148916 containerd[1477]: time="2024-06-25T16:27:32.148885142Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.152769 containerd[1477]: time="2024-06-25T16:27:32.152737466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.154154 containerd[1477]: time="2024-06-25T16:27:32.154083539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.193612153s" Jun 25 16:27:32.154310 containerd[1477]: time="2024-06-25T16:27:32.154279535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:27:32.156736 containerd[1477]: time="2024-06-25T16:27:32.156706387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:27:32.158118 containerd[1477]: time="2024-06-25T16:27:32.157958163Z" level=info msg="CreateContainer within sandbox \"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:27:32.164899 containerd[1477]: time="2024-06-25T16:27:32.164861326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d6rxn,Uid:75c7c79a-d44c-47ce-a93f-c54170ddf76b,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610\"" Jun 25 16:27:32.169570 containerd[1477]: time="2024-06-25T16:27:32.169515335Z" level=info msg="CreateContainer within sandbox \"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:27:32.249745 containerd[1477]: time="2024-06-25T16:27:32.249685553Z" level=info msg="CreateContainer within sandbox \"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7101c65b0c40a29712cfc92ce3e8aab26d4cd12be28c46ba41018dccf832c008\"" Jun 25 16:27:32.251433 containerd[1477]: time="2024-06-25T16:27:32.250408039Z" level=info msg="StartContainer for \"7101c65b0c40a29712cfc92ce3e8aab26d4cd12be28c46ba41018dccf832c008\"" Jun 25 16:27:32.253234 containerd[1477]: time="2024-06-25T16:27:32.253083686Z" level=info msg="CreateContainer within sandbox \"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72afc6966bdaaeca27166fb3587b7f83eb3b0924c792c740a9b9a90ed7dd0204\"" Jun 25 16:27:32.256325 containerd[1477]: time="2024-06-25T16:27:32.256273123Z" level=info msg="StartContainer for \"72afc6966bdaaeca27166fb3587b7f83eb3b0924c792c740a9b9a90ed7dd0204\"" Jun 25 16:27:32.286806 systemd[1]: Started 
cri-containerd-7101c65b0c40a29712cfc92ce3e8aab26d4cd12be28c46ba41018dccf832c008.scope - libcontainer container 7101c65b0c40a29712cfc92ce3e8aab26d4cd12be28c46ba41018dccf832c008. Jun 25 16:27:32.290340 systemd[1]: Started cri-containerd-72afc6966bdaaeca27166fb3587b7f83eb3b0924c792c740a9b9a90ed7dd0204.scope - libcontainer container 72afc6966bdaaeca27166fb3587b7f83eb3b0924c792c740a9b9a90ed7dd0204. Jun 25 16:27:32.296683 systemd-networkd[1234]: cali715d7d8a873: Gained IPv6LL Jun 25 16:27:32.305000 audit: BPF prog-id=205 op=LOAD Jun 25 16:27:32.306000 audit: BPF prog-id=206 op=LOAD Jun 25 16:27:32.306000 audit[4924]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4864 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732616663363936366264616165636132373136366662333538376237 Jun 25 16:27:32.306000 audit: BPF prog-id=207 op=LOAD Jun 25 16:27:32.306000 audit[4924]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4864 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732616663363936366264616165636132373136366662333538376237 Jun 25 16:27:32.306000 audit: BPF prog-id=207 op=UNLOAD Jun 25 16:27:32.306000 audit: BPF prog-id=206 op=UNLOAD Jun 25 16:27:32.306000 audit: BPF prog-id=208 op=LOAD Jun 25 16:27:32.306000 audit[4924]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4864 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732616663363936366264616165636132373136366662333538376237 Jun 25 16:27:32.310000 audit: BPF prog-id=209 op=LOAD Jun 25 16:27:32.310000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4486 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303163363562306334306132393731326366633932636533653861 Jun 25 16:27:32.310000 audit: BPF prog-id=210 op=LOAD Jun 25 16:27:32.310000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 
items=0 ppid=4486 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303163363562306334306132393731326366633932636533653861 Jun 25 16:27:32.310000 audit: BPF prog-id=210 op=UNLOAD Jun 25 16:27:32.310000 audit: BPF prog-id=209 op=UNLOAD Jun 25 16:27:32.310000 audit: BPF prog-id=211 op=LOAD Jun 25 16:27:32.310000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4486 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303163363562306334306132393731326366633932636533653861 Jun 25 16:27:32.342688 containerd[1477]: time="2024-06-25T16:27:32.342636819Z" level=info msg="StartContainer for \"7101c65b0c40a29712cfc92ce3e8aab26d4cd12be28c46ba41018dccf832c008\" returns successfully" Jun 25 16:27:32.342881 containerd[1477]: time="2024-06-25T16:27:32.342636919Z" level=info msg="StartContainer for \"72afc6966bdaaeca27166fb3587b7f83eb3b0924c792c740a9b9a90ed7dd0204\" returns successfully" Jun 25 16:27:32.854635 kubelet[2831]: I0625 16:27:32.854580 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-d6rxn" podStartSLOduration=52.854538219 podCreationTimestamp="2024-06-25 16:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:32.853916031 +0000 UTC m=+65.826584508" watchObservedRunningTime="2024-06-25 16:27:32.854538219 +0000 UTC m=+65.827206696" Jun 25 16:27:32.866000 audit[4977]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:32.866000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7686ad20 a2=0 a3=7ffc7686ad0c items=0 ppid=2968 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:32.877000 audit[4977]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:32.877000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc7686ad20 a2=0 a3=7ffc7686ad0c items=0 ppid=2968 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.877000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:33.766848 systemd-networkd[1234]: cali244ab3f2626: Gained IPv6LL Jun 25 16:27:33.886000 audit[4979]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:33.886000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe49a6c860 a2=0 a3=7ffe49a6c84c items=0 ppid=2968 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:33.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:33.891000 audit[4979]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:33.891000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe49a6c860 a2=0 a3=7ffe49a6c84c items=0 ppid=2968 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:33.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:35.414810 containerd[1477]: time="2024-06-25T16:27:35.414760328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:35.418050 containerd[1477]: time="2024-06-25T16:27:35.417975266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:27:35.421253 containerd[1477]: time="2024-06-25T16:27:35.421213304Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:35.424296 containerd[1477]: time="2024-06-25T16:27:35.424263345Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:35.428437 containerd[1477]: time="2024-06-25T16:27:35.428405466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:35.429126 containerd[1477]: time="2024-06-25T16:27:35.429088653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.271096791s" Jun 25 16:27:35.429273 containerd[1477]: time="2024-06-25T16:27:35.429245649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:27:35.430409 
containerd[1477]: time="2024-06-25T16:27:35.430382128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:27:35.450065 containerd[1477]: time="2024-06-25T16:27:35.450026250Z" level=info msg="CreateContainer within sandbox \"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:27:35.508040 containerd[1477]: time="2024-06-25T16:27:35.507981936Z" level=info msg="CreateContainer within sandbox \"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e\"" Jun 25 16:27:35.508843 containerd[1477]: time="2024-06-25T16:27:35.508801820Z" level=info msg="StartContainer for \"a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e\"" Jun 25 16:27:35.538780 systemd[1]: Started cri-containerd-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e.scope - libcontainer container a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e. Jun 25 16:27:35.549000 audit: BPF prog-id=212 op=LOAD Jun 25 16:27:35.552817 kernel: kauditd_printk_skb: 109 callbacks suppressed Jun 25 16:27:35.552930 kernel: audit: type=1334 audit(1719332855.549:636): prog-id=212 op=LOAD Jun 25 16:27:35.549000 audit: BPF prog-id=213 op=LOAD Jun 25 16:27:35.558965 kernel: audit: type=1334 audit(1719332855.549:637): prog-id=213 op=LOAD Jun 25 16:27:35.549000 audit[4994]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4636 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.569054 kernel: audit: type=1300 audit(1719332855.549:637): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4636 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136336637346339333363396664333530653635393763393632656362 Jun 25 16:27:35.579649 kernel: audit: type=1327 audit(1719332855.549:637): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136336637346339333363396664333530653635393763393632656362 Jun 25 16:27:35.584016 kernel: audit: type=1334 audit(1719332855.549:638): prog-id=214 op=LOAD Jun 25 16:27:35.549000 audit: BPF prog-id=214 op=LOAD Jun 25 16:27:35.595845 kernel: audit: type=1300 audit(1719332855.549:638): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4636 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.549000 audit[4994]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4636 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136336637346339333363396664333530653635393763393632656362 Jun 25 16:27:35.606280 kernel: audit: type=1327 audit(1719332855.549:638): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136336637346339333363396664333530653635393763393632656362 Jun 25 16:27:35.549000 audit: BPF prog-id=214 op=UNLOAD Jun 25 16:27:35.611722 kernel: audit: type=1334 audit(1719332855.549:639): prog-id=214 op=UNLOAD Jun 25 16:27:35.549000 audit: BPF prog-id=213 op=UNLOAD Jun 25 16:27:35.618619 kernel: audit: type=1334 audit(1719332855.549:640): prog-id=213 op=UNLOAD Jun 25 16:27:35.618703 kernel: audit: type=1334 audit(1719332855.549:641): prog-id=215 op=LOAD Jun 25 16:27:35.549000 audit: BPF prog-id=215 op=LOAD Jun 25 16:27:35.549000 audit[4994]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4636 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136336637346339333363396664333530653635393763393632656362 Jun 25 16:27:35.627856 containerd[1477]: time="2024-06-25T16:27:35.627808132Z" level=info msg="StartContainer for \"a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e\" returns successfully" Jun 25 16:27:35.870058 kubelet[2831]: I0625 16:27:35.868540 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58cc6dbf49-rlrpx" podStartSLOduration=41.70669902 podCreationTimestamp="2024-06-25 16:26:50 +0000 UTC" firstStartedPulling="2024-06-25 16:27:31.267935255 +0000 UTC m=+64.240603732" lastFinishedPulling="2024-06-25 16:27:35.42972634 +0000 UTC m=+68.402394917" observedRunningTime="2024-06-25 16:27:35.866840136 +0000 UTC m=+68.839508613" watchObservedRunningTime="2024-06-25 16:27:35.868490205 +0000 UTC m=+68.841158982" Jun 25 16:27:37.860275 containerd[1477]: time="2024-06-25T16:27:37.860221537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:37.862044 containerd[1477]: time="2024-06-25T16:27:37.861987403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:27:37.866088 containerd[1477]: time="2024-06-25T16:27:37.866052226Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:37.869615 containerd[1477]: time="2024-06-25T16:27:37.869567360Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:37.873108 containerd[1477]: time="2024-06-25T16:27:37.873075793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:37.873743 containerd[1477]: time="2024-06-25T16:27:37.873701082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.443161357s" Jun 25 16:27:37.873838 containerd[1477]: time="2024-06-25T16:27:37.873752281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:27:37.876041 containerd[1477]: time="2024-06-25T16:27:37.876008238Z" level=info msg="CreateContainer within sandbox \"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:27:37.920846 containerd[1477]: time="2024-06-25T16:27:37.920793491Z" level=info msg="CreateContainer within sandbox \"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5087e9c150effa82efa5359aa37f8fe0c939c11203a8901cb863d251aa8dae36\"" Jun 25 16:27:37.921480 containerd[1477]: time="2024-06-25T16:27:37.921450678Z" level=info msg="StartContainer for \"5087e9c150effa82efa5359aa37f8fe0c939c11203a8901cb863d251aa8dae36\"" Jun 25 16:27:37.952792 systemd[1]: Started cri-containerd-5087e9c150effa82efa5359aa37f8fe0c939c11203a8901cb863d251aa8dae36.scope - libcontainer container 5087e9c150effa82efa5359aa37f8fe0c939c11203a8901cb863d251aa8dae36. 
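Editor's note: the pull that completes above reports both the bytes read (10147655) and the wall-clock time ("in 2.443161357s"). As a small worked example, built only from those two logged figures and not from any containerd API, the effective transfer rate works out to roughly 4 MiB/s:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures reported by containerd in the log lines above.
	bytesRead := 10147655.0
	elapsed, err := time.ParseDuration("2.443161357s")
	if err != nil {
		panic(err)
	}

	mib := bytesRead / (1024 * 1024)
	fmt.Printf("pulled %.1f MiB in %s (%.2f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}
```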
Jun 25 16:27:37.965000 audit: BPF prog-id=216 op=LOAD Jun 25 16:27:37.965000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4486 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:37.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383765396331353065666661383265666135333539616133376638 Jun 25 16:27:37.965000 audit: BPF prog-id=217 op=LOAD Jun 25 16:27:37.965000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4486 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:37.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383765396331353065666661383265666135333539616133376638 Jun 25 16:27:37.965000 audit: BPF prog-id=217 op=UNLOAD Jun 25 16:27:37.965000 audit: BPF prog-id=216 op=UNLOAD Jun 25 16:27:37.965000 audit: BPF prog-id=218 op=LOAD Jun 25 16:27:37.965000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4486 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:37.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383765396331353065666661383265666135333539616133376638 Jun 25 16:27:37.991652 containerd[1477]: time="2024-06-25T16:27:37.991603551Z" level=info msg="StartContainer for \"5087e9c150effa82efa5359aa37f8fe0c939c11203a8901cb863d251aa8dae36\" returns successfully" Jun 25 16:27:38.698077 kubelet[2831]: I0625 16:27:38.698041 2831 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:27:38.698077 kubelet[2831]: I0625 16:27:38.698082 2831 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:27:38.786229 kubelet[2831]: I0625 16:27:38.786186 2831 topology_manager.go:215] "Topology Admit Handler" podUID="56878772-a139-4849-8bad-729ec9d337d2" podNamespace="calico-apiserver" podName="calico-apiserver-55c9858cc-fk8lb" Jun 25 16:27:38.793295 systemd[1]: Created slice kubepods-besteffort-pod56878772_a139_4849_8bad_729ec9d337d2.slice - libcontainer container kubepods-besteffort-pod56878772_a139_4849_8bad_729ec9d337d2.slice. 
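Editor's note: the systemd slice created above, kubepods-besteffort-pod56878772_a139_4849_8bad_729ec9d337d2.slice, matches the pod UID reported by the Topology Admit Handler entry with the dashes replaced by underscores. The helper below, sliceNameForPod, is a hypothetical sketch of that mapping as it is visible in the log, not kubelet's actual cgroup-driver code:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the naming pattern seen in the log: a QoS-class
// prefix, "pod" plus the UID with dashes turned into underscores, and a
// ".slice" suffix. Illustration only; not taken from kubelet source.
func sliceNameForPod(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	uid := "56878772-a139-4849-8bad-729ec9d337d2" // from the Topology Admit Handler entry
	fmt.Println(sliceNameForPod("besteffort", uid))
	// kubepods-besteffort-pod56878772_a139_4849_8bad_729ec9d337d2.slice
}
```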
Jun 25 16:27:38.816445 kubelet[2831]: I0625 16:27:38.815882 2831 topology_manager.go:215] "Topology Admit Handler" podUID="afbb62f5-781c-4558-a8bc-4cc594bb1bf6" podNamespace="calico-apiserver" podName="calico-apiserver-55c9858cc-wmrfc" Jun 25 16:27:38.822666 systemd[1]: Created slice kubepods-besteffort-podafbb62f5_781c_4558_a8bc_4cc594bb1bf6.slice - libcontainer container kubepods-besteffort-podafbb62f5_781c_4558_a8bc_4cc594bb1bf6.slice. Jun 25 16:27:38.832000 audit[5101]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=5101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:38.832000 audit[5101]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdf593c6c0 a2=0 a3=7ffdf593c6ac items=0 ppid=2968 pid=5101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:38.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:38.834000 audit[5101]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=5101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:38.834000 audit[5101]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf593c6c0 a2=0 a3=7ffdf593c6ac items=0 ppid=2968 pid=5101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:38.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:38.856000 audit[5103]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:38.856000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffef4809f90 a2=0 a3=7ffef4809f7c items=0 ppid=2968 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:38.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:38.858000 audit[5103]: NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:38.858000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef4809f90 a2=0 a3=7ffef4809f7c items=0 ppid=2968 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:38.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:38.880011 kubelet[2831]: I0625 16:27:38.879954 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-prpxb" podStartSLOduration=41.964132535 podCreationTimestamp="2024-06-25 16:26:49 +0000 UTC" firstStartedPulling="2024-06-25 16:27:29.95830963 +0000 UTC m=+62.930978107" 
lastFinishedPulling="2024-06-25 16:27:37.874080574 +0000 UTC m=+70.846749051" observedRunningTime="2024-06-25 16:27:38.878488306 +0000 UTC m=+71.851156783" watchObservedRunningTime="2024-06-25 16:27:38.879903479 +0000 UTC m=+71.852571956" Jun 25 16:27:38.898035 kubelet[2831]: I0625 16:27:38.897990 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56878772-a139-4849-8bad-729ec9d337d2-calico-apiserver-certs\") pod \"calico-apiserver-55c9858cc-fk8lb\" (UID: \"56878772-a139-4849-8bad-729ec9d337d2\") " pod="calico-apiserver/calico-apiserver-55c9858cc-fk8lb" Jun 25 16:27:38.898243 kubelet[2831]: I0625 16:27:38.898050 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzjz\" (UniqueName: \"kubernetes.io/projected/56878772-a139-4849-8bad-729ec9d337d2-kube-api-access-kbzjz\") pod \"calico-apiserver-55c9858cc-fk8lb\" (UID: \"56878772-a139-4849-8bad-729ec9d337d2\") " pod="calico-apiserver/calico-apiserver-55c9858cc-fk8lb" Jun 25 16:27:38.999026 kubelet[2831]: I0625 16:27:38.998888 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/afbb62f5-781c-4558-a8bc-4cc594bb1bf6-calico-apiserver-certs\") pod \"calico-apiserver-55c9858cc-wmrfc\" (UID: \"afbb62f5-781c-4558-a8bc-4cc594bb1bf6\") " pod="calico-apiserver/calico-apiserver-55c9858cc-wmrfc" Jun 25 16:27:38.999026 kubelet[2831]: I0625 16:27:38.998953 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k926g\" (UniqueName: \"kubernetes.io/projected/afbb62f5-781c-4558-a8bc-4cc594bb1bf6-kube-api-access-k926g\") pod \"calico-apiserver-55c9858cc-wmrfc\" (UID: \"afbb62f5-781c-4558-a8bc-4cc594bb1bf6\") " pod="calico-apiserver/calico-apiserver-55c9858cc-wmrfc" Jun 25 16:27:38.999415 kubelet[2831]: E0625 16:27:38.999393 2831 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:27:38.999637 kubelet[2831]: E0625 16:27:38.999621 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56878772-a139-4849-8bad-729ec9d337d2-calico-apiserver-certs podName:56878772-a139-4849-8bad-729ec9d337d2 nodeName:}" failed. No retries permitted until 2024-06-25 16:27:39.499565534 +0000 UTC m=+72.472234011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/56878772-a139-4849-8bad-729ec9d337d2-calico-apiserver-certs") pod "calico-apiserver-55c9858cc-fk8lb" (UID: "56878772-a139-4849-8bad-729ec9d337d2") : secret "calico-apiserver-certs" not found Jun 25 16:27:39.128340 containerd[1477]: time="2024-06-25T16:27:39.127839744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c9858cc-wmrfc,Uid:afbb62f5-781c-4558-a8bc-4cc594bb1bf6,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:27:39.276211 systemd-networkd[1234]: cali4e933993241: Link UP Jun 25 16:27:39.277703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:39.277790 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4e933993241: link becomes ready Jun 25 16:27:39.281905 systemd-networkd[1234]: cali4e933993241: Gained carrier Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.212 [INFO][5107] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0 calico-apiserver-55c9858cc- calico-apiserver afbb62f5-781c-4558-a8bc-4cc594bb1bf6 936 0 2024-06-25 16:27:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c9858cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c calico-apiserver-55c9858cc-wmrfc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e933993241 [] []}} ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.213 [INFO][5107] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.240 [INFO][5118] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" HandleID="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.249 [INFO][5118] ipam_plugin.go 264: Auto assigning IP ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" HandleID="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003785c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"calico-apiserver-55c9858cc-wmrfc", "timestamp":"2024-06-25 16:27:39.240947138 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
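Editor's note: the failed secret mount above is deferred with "durationBeforeRetry 500ms", and the nestedpendingoperations entry records the earliest time the kubelet will try again. A minimal sketch, using the timestamp copied from that entry and only stdlib time parsing, shows that the logged deadline is simply the failed attempt's time plus the 500ms backoff:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Deadline and backoff taken from the nestedpendingoperations entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	retryAt, err := time.Parse(layout, "2024-06-25 16:27:39.499565534 +0000 UTC")
	if err != nil {
		panic(err)
	}
	backoff := 500 * time.Millisecond

	// The retry deadline is the failure time plus the backoff, so subtracting
	// the backoff recovers when the MountVolume.SetUp call failed.
	fmt.Println("mount attempt failed at:", retryAt.Add(-backoff))
	fmt.Println("next retry allowed at: ", retryAt)
}
```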
Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.249 [INFO][5118] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.249 [INFO][5118] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.249 [INFO][5118] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.250 [INFO][5118] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.254 [INFO][5118] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.257 [INFO][5118] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.259 [INFO][5118] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.261 [INFO][5118] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.261 [INFO][5118] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.262 [INFO][5118] ipam.go 1685: Creating new handle: k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.265 [INFO][5118] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.270 [INFO][5118] ipam.go 1216: Successfully claimed IPs: [192.168.52.197/26] block=192.168.52.192/26 handle="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.270 [INFO][5118] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.197/26] handle="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.270 [INFO][5118] ipam_plugin.go 373: Released host-wide IPAM lock. 
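Editor's note: both CNI invocations in this section (handlers 4829 and 5118) bracket their walk over the shared 192.168.52.192/26 block with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock". The sketch below is a deliberately simplified stand-in, an in-process sync.Mutex rather than Calico's node-wide lock and a toy counter rather than its block bitmap, to illustrate why serializing allocations keeps two concurrent sandbox setups from claiming the same address:

```go
package main

import (
	"fmt"
	"sync"
)

// blockAllocator is a toy stand-in for a per-node IPAM block: assignments are
// serialized by a lock so concurrent CNI ADDs cannot claim the same address.
// Illustration of the locking pattern in the log only; not Calico code.
type blockAllocator struct {
	mu   sync.Mutex
	base string
	next int
}

func (b *blockAllocator) assign() string {
	b.mu.Lock() // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock()
	ip := fmt.Sprintf("%s.%d", b.base, b.next)
	b.next++
	return ip // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	alloc := &blockAllocator{base: "192.168.52", next: 196}

	var wg sync.WaitGroup
	results := make(chan string, 2)
	for i := 0; i < 2; i++ { // two concurrent pod sandboxes, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- alloc.assign()
		}()
	}
	wg.Wait()
	close(results)
	for ip := range results {
		fmt.Println("assigned", ip) // two distinct addresses, never a duplicate
	}
}
```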
Jun 25 16:27:39.298995 containerd[1477]: 2024-06-25 16:27:39.270 [INFO][5118] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.197/26] IPv6=[] ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" HandleID="k8s-pod-network.18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.272 [INFO][5107] k8s.go 386: Populated endpoint ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0", GenerateName:"calico-apiserver-55c9858cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"afbb62f5-781c-4558-a8bc-4cc594bb1bf6", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c9858cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"calico-apiserver-55c9858cc-wmrfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e933993241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.272 [INFO][5107] k8s.go 387: Calico CNI using IPs: [192.168.52.197/32] ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.272 [INFO][5107] dataplane_linux.go 68: Setting the host side veth name to cali4e933993241 ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.282 [INFO][5107] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.282 [INFO][5107] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0", GenerateName:"calico-apiserver-55c9858cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"afbb62f5-781c-4558-a8bc-4cc594bb1bf6", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c9858cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a", Pod:"calico-apiserver-55c9858cc-wmrfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e933993241", MAC:"36:ab:2c:7c:bb:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:39.300025 containerd[1477]: 2024-06-25 16:27:39.293 [INFO][5107] k8s.go 500: Wrote updated endpoint to datastore ContainerID="18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-wmrfc" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--wmrfc-eth0" Jun 25 16:27:39.326278 containerd[1477]: time="2024-06-25T16:27:39.326172751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:39.326536 containerd[1477]: time="2024-06-25T16:27:39.326501545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:39.326712 containerd[1477]: time="2024-06-25T16:27:39.326683241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:39.326843 containerd[1477]: time="2024-06-25T16:27:39.326819139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:39.325000 audit[5153]: NETFILTER_CFG table=filter:125 family=2 entries=55 op=nft_register_chain pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:39.325000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffd56e70100 a2=0 a3=7ffd56e700ec items=0 ppid=4238 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.325000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:39.348784 systemd[1]: Started cri-containerd-18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a.scope - libcontainer container 18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a. Jun 25 16:27:39.357000 audit: BPF prog-id=219 op=LOAD Jun 25 16:27:39.357000 audit: BPF prog-id=220 op=LOAD Jun 25 16:27:39.357000 audit[5160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=5148 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646662373865636532383164316130643436613461363233313130 Jun 25 16:27:39.358000 audit: BPF prog-id=221 op=LOAD Jun 25 16:27:39.358000 audit[5160]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=5148 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646662373865636532383164316130643436613461363233313130 Jun 25 16:27:39.358000 audit: BPF prog-id=221 op=UNLOAD Jun 25 16:27:39.358000 audit: BPF prog-id=220 op=UNLOAD Jun 25 16:27:39.358000 audit: BPF prog-id=222 op=LOAD Jun 25 16:27:39.358000 audit[5160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=5148 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646662373865636532383164316130643436613461363233313130 Jun 25 16:27:39.395077 containerd[1477]: time="2024-06-25T16:27:39.394987769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c9858cc-wmrfc,Uid:afbb62f5-781c-4558-a8bc-4cc594bb1bf6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a\"" Jun 25 16:27:39.397218 containerd[1477]: time="2024-06-25T16:27:39.396794036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:27:39.697268 containerd[1477]: time="2024-06-25T16:27:39.697195341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c9858cc-fk8lb,Uid:56878772-a139-4849-8bad-729ec9d337d2,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:27:39.901227 systemd-networkd[1234]: cali21ac0f199bc: Link UP Jun 25 16:27:39.905695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali21ac0f199bc: link becomes ready Jun 25 16:27:39.905511 systemd-networkd[1234]: cali21ac0f199bc: Gained carrier Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.806 [INFO][5186] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0 calico-apiserver-55c9858cc- calico-apiserver 56878772-a139-4849-8bad-729ec9d337d2 932 0 2024-06-25 16:27:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c9858cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-a46e2cd05c calico-apiserver-55c9858cc-fk8lb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21ac0f199bc [] []}} ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.806 [INFO][5186] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.843 [INFO][5196] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" HandleID="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.852 [INFO][5196] ipam_plugin.go 264: Auto assigning IP ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" HandleID="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029a570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-a46e2cd05c", "pod":"calico-apiserver-55c9858cc-fk8lb", "timestamp":"2024-06-25 16:27:39.843419218 +0000 UTC"}, Hostname:"ci-3815.2.4-a-a46e2cd05c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.852 [INFO][5196] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.852 [INFO][5196] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.852 [INFO][5196] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-a46e2cd05c' Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.854 [INFO][5196] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.856 [INFO][5196] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.859 [INFO][5196] ipam.go 489: Trying affinity for 192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.861 [INFO][5196] ipam.go 155: Attempting to load block cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.868 [INFO][5196] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.868 [INFO][5196] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.873 [INFO][5196] ipam.go 1685: Creating new handle: k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0 Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.881 [INFO][5196] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.896 [INFO][5196] ipam.go 1216: Successfully claimed IPs: [192.168.52.198/26] block=192.168.52.192/26 handle="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.896 [INFO][5196] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.198/26] handle="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" host="ci-3815.2.4-a-a46e2cd05c" Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.897 [INFO][5196] ipam_plugin.go 373: Released host-wide IPAM lock. 
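Both allocations above record their IPAM handle as the sandbox container ID prefixed with `k8s-pod-network.`, as the HandleID fields show. A throwaway sketch spelling out that convention (the `make_handle` helper is hypothetical, written here only for illustration):

```python
# Hypothetical helper: reproduces the handle naming visible in the trace above,
# i.e. "k8s-pod-network." + <sandbox container ID>.
def make_handle(container_id: str) -> str:
    return "k8s-pod-network." + container_id

sandbox_id = "b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0"
assert make_handle(sandbox_id) == (
    "k8s-pod-network."
    "b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0"
)
```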
Jun 25 16:27:39.927069 containerd[1477]: 2024-06-25 16:27:39.897 [INFO][5196] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.52.198/26] IPv6=[] ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" HandleID="k8s-pod-network.b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.898 [INFO][5186] k8s.go 386: Populated endpoint ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0", GenerateName:"calico-apiserver-55c9858cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"56878772-a139-4849-8bad-729ec9d337d2", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c9858cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"", Pod:"calico-apiserver-55c9858cc-fk8lb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ac0f199bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.898 [INFO][5186] k8s.go 387: Calico CNI using IPs: [192.168.52.198/32] ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.898 [INFO][5186] dataplane_linux.go 68: Setting the host side veth name to cali21ac0f199bc ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.906 [INFO][5186] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.906 [INFO][5186] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0", GenerateName:"calico-apiserver-55c9858cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"56878772-a139-4849-8bad-729ec9d337d2", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c9858cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0", Pod:"calico-apiserver-55c9858cc-fk8lb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ac0f199bc", MAC:"2a:3b:ec:60:c7:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:39.928044 containerd[1477]: 2024-06-25 16:27:39.925 [INFO][5186] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0" Namespace="calico-apiserver" Pod="calico-apiserver-55c9858cc-fk8lb" WorkloadEndpoint="ci--3815.2.4--a--a46e2cd05c-k8s-calico--apiserver--55c9858cc--fk8lb-eth0" Jun 25 16:27:39.982750 containerd[1477]: time="2024-06-25T16:27:39.982081835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:39.982750 containerd[1477]: time="2024-06-25T16:27:39.982144234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:39.982750 containerd[1477]: time="2024-06-25T16:27:39.982169334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:39.982750 containerd[1477]: time="2024-06-25T16:27:39.982187234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:40.011821 systemd[1]: Started cri-containerd-b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0.scope - libcontainer container b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0. 
Jun 25 16:27:40.053000 audit: BPF prog-id=223 op=LOAD Jun 25 16:27:40.053000 audit: BPF prog-id=224 op=LOAD Jun 25 16:27:40.053000 audit[5232]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5220 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626261383665353564336166633837303462336337663964653664 Jun 25 16:27:40.054000 audit: BPF prog-id=225 op=LOAD Jun 25 16:27:40.054000 audit[5232]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5220 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626261383665353564336166633837303462336337663964653664 Jun 25 16:27:40.054000 audit: BPF prog-id=225 op=UNLOAD Jun 25 16:27:40.054000 audit: BPF prog-id=224 op=UNLOAD Jun 25 16:27:40.054000 audit: BPF prog-id=226 op=LOAD Jun 25 16:27:40.054000 audit[5232]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=5220 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626261383665353564336166633837303462336337663964653664 Jun 25 16:27:40.090000 audit[5251]: NETFILTER_CFG table=filter:126 family=2 entries=55 op=nft_register_chain pid=5251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:40.094434 containerd[1477]: time="2024-06-25T16:27:40.094389957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c9858cc-fk8lb,Uid:56878772-a139-4849-8bad-729ec9d337d2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0\"" Jun 25 16:27:40.090000 audit[5251]: SYSCALL arch=c000003e syscall=46 success=yes exit=27152 a0=3 a1=7ffcef64cc50 a2=0 a3=7ffcef64cc3c items=0 ppid=4238 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:41.063546 systemd-networkd[1234]: cali4e933993241: Gained IPv6LL Jun 25 16:27:41.830860 systemd-networkd[1234]: cali21ac0f199bc: Gained IPv6LL Jun 25 16:27:42.791790 containerd[1477]: time="2024-06-25T16:27:42.791732742Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:42.793619 containerd[1477]: time="2024-06-25T16:27:42.793553809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:27:42.797559 containerd[1477]: time="2024-06-25T16:27:42.797527236Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:42.803124 containerd[1477]: time="2024-06-25T16:27:42.803087635Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:42.806745 containerd[1477]: time="2024-06-25T16:27:42.806714569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:42.807432 containerd[1477]: time="2024-06-25T16:27:42.807394757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.410554622s" Jun 25 16:27:42.807583 containerd[1477]: time="2024-06-25T16:27:42.807556754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:27:42.808966 containerd[1477]: time="2024-06-25T16:27:42.808938329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:27:42.810313 containerd[1477]: time="2024-06-25T16:27:42.810280604Z" level=info msg="CreateContainer within sandbox \"18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:27:42.857235 containerd[1477]: time="2024-06-25T16:27:42.857181850Z" level=info msg="CreateContainer within sandbox \"18dfb78ece281d1a0d46a4a6231101ef2e1bc5cb1435a22adb5e3e18dc458d9a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe\"" Jun 25 16:27:42.857893 containerd[1477]: time="2024-06-25T16:27:42.857785039Z" level=info msg="StartContainer for \"1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe\"" Jun 25 16:27:42.897831 systemd[1]: run-containerd-runc-k8s.io-1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe-runc.EvxvUl.mount: Deactivated successfully. Jun 25 16:27:42.901959 systemd[1]: Started cri-containerd-1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe.scope - libcontainer container 1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe. 
Jun 25 16:27:42.911000 audit: BPF prog-id=227 op=LOAD Jun 25 16:27:42.914996 kernel: kauditd_printk_skb: 55 callbacks suppressed Jun 25 16:27:42.915090 kernel: audit: type=1334 audit(1719332862.911:665): prog-id=227 op=LOAD Jun 25 16:27:42.911000 audit: BPF prog-id=228 op=LOAD Jun 25 16:27:42.920818 kernel: audit: type=1334 audit(1719332862.911:666): prog-id=228 op=LOAD Jun 25 16:27:42.911000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5148 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.930947 kernel: audit: type=1300 audit(1719332862.911:666): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5148 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164343532333333323235373738646365343165613134356463633335 Jun 25 16:27:42.944170 kernel: audit: type=1327 audit(1719332862.911:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164343532333333323235373738646365343165613134356463633335 Jun 25 16:27:42.948513 kernel: audit: type=1334 audit(1719332862.911:667): prog-id=229 op=LOAD Jun 25 16:27:42.911000 audit: BPF prog-id=229 op=LOAD Jun 25 16:27:42.911000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5148 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.961745 kernel: audit: type=1300 audit(1719332862.911:667): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5148 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164343532333333323235373738646365343165613134356463633335 Jun 25 16:27:42.973639 kernel: audit: type=1327 audit(1719332862.911:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164343532333333323235373738646365343165613134356463633335 Jun 25 16:27:42.911000 audit: BPF prog-id=229 op=UNLOAD Jun 25 16:27:42.979558 kernel: audit: type=1334 audit(1719332862.911:668): prog-id=229 op=UNLOAD Jun 25 16:27:42.911000 audit: BPF prog-id=228 op=UNLOAD Jun 25 16:27:42.911000 audit: BPF prog-id=230 op=LOAD Jun 25 16:27:42.986001 kernel: audit: type=1334 audit(1719332862.911:669): prog-id=228 op=UNLOAD Jun 25 
16:27:42.986067 kernel: audit: type=1334 audit(1719332862.911:670): prog-id=230 op=LOAD Jun 25 16:27:42.911000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=5148 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164343532333333323235373738646365343165613134356463633335 Jun 25 16:27:42.998438 containerd[1477]: time="2024-06-25T16:27:42.998383077Z" level=info msg="StartContainer for \"1d452333225778dce41ea145dcc35b1ce68c7b8d5a7b6aa55797b2fad1d53fbe\" returns successfully" Jun 25 16:27:43.257889 containerd[1477]: time="2024-06-25T16:27:43.257832082Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:43.259885 containerd[1477]: time="2024-06-25T16:27:43.259822046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:27:43.263913 containerd[1477]: time="2024-06-25T16:27:43.263876773Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:43.268716 containerd[1477]: time="2024-06-25T16:27:43.268684286Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:43.275054 containerd[1477]: time="2024-06-25T16:27:43.275014372Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:43.276194 containerd[1477]: time="2024-06-25T16:27:43.276151351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 467.071225ms" Jun 25 16:27:43.276301 containerd[1477]: time="2024-06-25T16:27:43.276201650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:27:43.279813 containerd[1477]: time="2024-06-25T16:27:43.279783585Z" level=info msg="CreateContainer within sandbox \"b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:27:43.337007 containerd[1477]: time="2024-06-25T16:27:43.336952251Z" level=info msg="CreateContainer within sandbox \"b0bba86e55d3afc8704b3c7f9de6d35cb0ce3e949640c680c980602179c3b3f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9976cacc745206a44279e9339316ddfc978cc4740b0ccd52c291840cbfdf8a5d\"" Jun 25 16:27:43.338245 containerd[1477]: time="2024-06-25T16:27:43.338206628Z" level=info msg="StartContainer for 
\"9976cacc745206a44279e9339316ddfc978cc4740b0ccd52c291840cbfdf8a5d\"" Jun 25 16:27:43.369786 systemd[1]: Started cri-containerd-9976cacc745206a44279e9339316ddfc978cc4740b0ccd52c291840cbfdf8a5d.scope - libcontainer container 9976cacc745206a44279e9339316ddfc978cc4740b0ccd52c291840cbfdf8a5d. Jun 25 16:27:43.386000 audit: BPF prog-id=231 op=LOAD Jun 25 16:27:43.387000 audit: BPF prog-id=232 op=LOAD Jun 25 16:27:43.387000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5220 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939373663616363373435323036613434323739653933333933313664 Jun 25 16:27:43.387000 audit: BPF prog-id=233 op=LOAD Jun 25 16:27:43.387000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5220 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939373663616363373435323036613434323739653933333933313664 Jun 25 16:27:43.387000 audit: BPF prog-id=233 op=UNLOAD Jun 25 16:27:43.387000 audit: BPF prog-id=232 op=UNLOAD Jun 25 16:27:43.387000 audit: BPF prog-id=234 op=LOAD Jun 25 16:27:43.387000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5220 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939373663616363373435323036613434323739653933333933313664 Jun 25 16:27:43.432910 containerd[1477]: time="2024-06-25T16:27:43.432856316Z" level=info msg="StartContainer for \"9976cacc745206a44279e9339316ddfc978cc4740b0ccd52c291840cbfdf8a5d\" returns successfully" Jun 25 16:27:43.916113 kubelet[2831]: I0625 16:27:43.916080 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55c9858cc-fk8lb" podStartSLOduration=2.735494457 podCreationTimestamp="2024-06-25 16:27:38 +0000 UTC" firstStartedPulling="2024-06-25 16:27:40.096010827 +0000 UTC m=+73.068679304" lastFinishedPulling="2024-06-25 16:27:43.276529544 +0000 UTC m=+76.249198021" observedRunningTime="2024-06-25 16:27:43.900013364 +0000 UTC m=+76.872681941" watchObservedRunningTime="2024-06-25 16:27:43.916013174 +0000 UTC m=+76.888681651" Jun 25 16:27:43.916809 kubelet[2831]: I0625 16:27:43.916788 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55c9858cc-wmrfc" podStartSLOduration=2.5049768610000003 
podCreationTimestamp="2024-06-25 16:27:38 +0000 UTC" firstStartedPulling="2024-06-25 16:27:39.396286945 +0000 UTC m=+72.368955422" lastFinishedPulling="2024-06-25 16:27:42.808045345 +0000 UTC m=+75.780713822" observedRunningTime="2024-06-25 16:27:43.914320405 +0000 UTC m=+76.886988982" watchObservedRunningTime="2024-06-25 16:27:43.916735261 +0000 UTC m=+76.889403738" Jun 25 16:27:43.943000 audit[5344]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:43.943000 audit[5344]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffcc80e9e0 a2=0 a3=7fffcc80e9cc items=0 ppid=2968 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:43.945000 audit[5344]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:43.945000 audit[5344]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcc80e9e0 a2=0 a3=7fffcc80e9cc items=0 ppid=2968 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:43.954000 audit[5346]: NETFILTER_CFG table=filter:129 family=2 entries=9 op=nft_register_rule pid=5346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:43.954000 audit[5346]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe251b4620 a2=0 a3=7ffe251b460c items=0 ppid=2968 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:43.955000 audit[5346]: NETFILTER_CFG table=nat:130 family=2 entries=27 op=nft_register_chain pid=5346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:43.955000 audit[5346]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe251b4620 a2=0 a3=7ffe251b460c items=0 ppid=2968 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:44.333763 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.NN4rua.mount: Deactivated successfully. 
Jun 25 16:27:44.972000 audit[5368]: NETFILTER_CFG table=filter:131 family=2 entries=8 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:44.972000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe93fdce80 a2=0 a3=7ffe93fdce6c items=0 ppid=2968 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:44.975000 audit[5368]: NETFILTER_CFG table=nat:132 family=2 entries=34 op=nft_register_chain pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:44.975000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe93fdce80 a2=0 a3=7ffe93fdce6c items=0 ppid=2968 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:44.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:14.336157 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.d4bCny.mount: Deactivated successfully. Jun 25 16:28:23.340000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.343796 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:28:23.343947 kernel: audit: type=1400 audit(1719332903.340:684): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.340000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.364624 kernel: audit: type=1400 audit(1719332903.340:683): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.340000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001d7fda0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:23.377464 kernel: audit: type=1300 audit(1719332903.340:684): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001d7fda0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:23.340000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:23.388918 kernel: audit: type=1327 audit(1719332903.340:684): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:23.340000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001ce28a0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:23.401719 kernel: audit: type=1300 audit(1719332903.340:683): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001ce28a0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:23.404181 kernel: audit: type=1327 audit(1719332903.340:683): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:23.340000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.625631 kernel: audit: type=1400 audit(1719332903.601:685): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.601000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00d47bb00 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.652126 kernel: audit: type=1400 audit(1719332903.601:686): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 
scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.652292 kernel: audit: type=1300 audit(1719332903.601:685): arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00d47bb00 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.601000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00e7dac20 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.664869 kernel: audit: type=1300 audit(1719332903.601:686): arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00e7dac20 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:23.601000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:23.602000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.602000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00ce4a600 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.602000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:23.625000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.625000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00ce4a960 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.625000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:23.659000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.659000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c0055f28a0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.659000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:23.659000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:23.659000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00d47be00 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:28:23.659000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:28:26.235186 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.IPl7eV.mount: Deactivated successfully. 
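The AVC records in this stretch all share one shape: kube-apiserver and kube-controller-manager, confined as `container_t`, are denied `watch` on the `etc_t`-labelled certificate files under /etc/kubernetes/pki, with `permissive=0`. Extracting the interesting fields from such a line is a one-regex job; the sketch below is illustrative, matches only the field layout seen in this log, and is not a general audit parser:

```python
import re

# Matches the key fields of the AVC lines in this log; not a general audit parser.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}\s+for\s+pid=(?P<pid>\d+)'
    r'\s+comm="(?P<comm>[^"]+)"\s+path="(?P<path>[^"]+)"'
    r'.*?scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)'
    r'\s+tclass=(?P<tclass>\S+)\s+permissive=(?P<permissive>\d)'
)

line = ('avc: denied { watch } for pid=2707 comm="kube-controller" '
        'path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 '
        'scontext=system_u:system_r:container_t:s0:c551,c614 '
        'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')

m = AVC_RE.search(line)
print(m.group("comm"), m.group("perm"), m.group("path"), "permissive=" + m.group("permissive"))
# kube-controller watch /etc/kubernetes/pki/ca.crt permissive=0
```

Every denial above fits this pattern, differing only in the process, the watched path, and the inode.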
Jun 25 16:28:27.040000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:27.040000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e4fa80 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:27.040000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:27.041000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:27.041000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000b2ac40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:27.041000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00114e180 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:27.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000b2ac80 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:28:27.044000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:27.629471 containerd[1477]: time="2024-06-25T16:28:27.629422352Z" level=info msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.664 [WARNING][5488] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0", GenerateName:"calico-kube-controllers-58cc6dbf49-", Namespace:"calico-system", SelfLink:"", UID:"6397532b-83a6-4d2d-bcc3-8908e6d508d3", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc6dbf49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7", Pod:"calico-kube-controllers-58cc6dbf49-rlrpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califccca6ac0c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.664 [INFO][5488] k8s.go 608: Cleaning up netns ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.664 [INFO][5488] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" iface="eth0" netns="" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.664 [INFO][5488] k8s.go 615: Releasing IP address(es) ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.664 [INFO][5488] utils.go 188: Calico CNI releasing IP address ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.684 [INFO][5494] ipam_plugin.go 411: Releasing address using handleID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.684 [INFO][5494] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.684 [INFO][5494] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.690 [WARNING][5494] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.690 [INFO][5494] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.691 [INFO][5494] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:27.693761 containerd[1477]: 2024-06-25 16:28:27.692 [INFO][5488] k8s.go 621: Teardown processing complete. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.694869 containerd[1477]: time="2024-06-25T16:28:27.693803725Z" level=info msg="TearDown network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" successfully" Jun 25 16:28:27.694869 containerd[1477]: time="2024-06-25T16:28:27.693841828Z" level=info msg="StopPodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" returns successfully" Jun 25 16:28:27.694869 containerd[1477]: time="2024-06-25T16:28:27.694288867Z" level=info msg="RemovePodSandbox for \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" Jun 25 16:28:27.694869 containerd[1477]: time="2024-06-25T16:28:27.694325170Z" level=info msg="Forcibly stopping sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\"" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.727 [WARNING][5512] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0", GenerateName:"calico-kube-controllers-58cc6dbf49-", Namespace:"calico-system", SelfLink:"", UID:"6397532b-83a6-4d2d-bcc3-8908e6d508d3", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc6dbf49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"e93a10fcff6166e21476d44cf5c2f5073d5da3c4dcb219c1bb9f2a6f92bf4be7", Pod:"calico-kube-controllers-58cc6dbf49-rlrpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califccca6ac0c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.728 [INFO][5512] k8s.go 608: Cleaning up netns ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.730 [INFO][5512] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" iface="eth0" netns="" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.730 [INFO][5512] k8s.go 615: Releasing IP address(es) ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.731 [INFO][5512] utils.go 188: Calico CNI releasing IP address ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.750 [INFO][5518] ipam_plugin.go 411: Releasing address using handleID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.750 [INFO][5518] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.750 [INFO][5518] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.755 [WARNING][5518] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.755 [INFO][5518] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" HandleID="k8s-pod-network.bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-calico--kube--controllers--58cc6dbf49--rlrpx-eth0" Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.756 [INFO][5518] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:27.758434 containerd[1477]: 2024-06-25 16:28:27.757 [INFO][5512] k8s.go 621: Teardown processing complete. ContainerID="bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521" Jun 25 16:28:27.759159 containerd[1477]: time="2024-06-25T16:28:27.758485425Z" level=info msg="TearDown network for sandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" successfully" Jun 25 16:28:27.770710 containerd[1477]: time="2024-06-25T16:28:27.770663979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:28:27.770903 containerd[1477]: time="2024-06-25T16:28:27.770750287Z" level=info msg="RemovePodSandbox \"bd12f68f0ba4e08841a87bf886849e91ba9b3a2c6e3c3572fce74d4dce7e3521\" returns successfully" Jun 25 16:28:27.771361 containerd[1477]: time="2024-06-25T16:28:27.771327137Z" level=info msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.802 [WARNING][5536] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8474323b-f265-4427-9f9e-fd6fa285383b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c", Pod:"csi-node-driver-prpxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali533385cd980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.802 [INFO][5536] k8s.go 608: Cleaning up netns ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.802 [INFO][5536] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" iface="eth0" netns="" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.802 [INFO][5536] k8s.go 615: Releasing IP address(es) ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.802 [INFO][5536] utils.go 188: Calico CNI releasing IP address ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.821 [INFO][5543] ipam_plugin.go 411: Releasing address using handleID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.821 [INFO][5543] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.821 [INFO][5543] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.827 [WARNING][5543] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.827 [INFO][5543] ipam_plugin.go 439: Releasing address using workloadID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.828 [INFO][5543] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:27.830492 containerd[1477]: 2024-06-25 16:28:27.829 [INFO][5536] k8s.go 621: Teardown processing complete. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.831214 containerd[1477]: time="2024-06-25T16:28:27.830539063Z" level=info msg="TearDown network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" successfully" Jun 25 16:28:27.831214 containerd[1477]: time="2024-06-25T16:28:27.830578966Z" level=info msg="StopPodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" returns successfully" Jun 25 16:28:27.831214 containerd[1477]: time="2024-06-25T16:28:27.831113012Z" level=info msg="RemovePodSandbox for \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" Jun 25 16:28:27.831214 containerd[1477]: time="2024-06-25T16:28:27.831153316Z" level=info msg="Forcibly stopping sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\"" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.863 [WARNING][5561] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8474323b-f265-4427-9f9e-fd6fa285383b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"b0b70d7366790e686ebe9fb7bd38294f0227d389fc535aa291f2c87731f5008c", Pod:"csi-node-driver-prpxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali533385cd980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.863 [INFO][5561] k8s.go 608: Cleaning up netns ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.863 [INFO][5561] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" iface="eth0" netns="" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.863 [INFO][5561] k8s.go 615: Releasing IP address(es) ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.864 [INFO][5561] utils.go 188: Calico CNI releasing IP address ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.885 [INFO][5567] ipam_plugin.go 411: Releasing address using handleID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.885 [INFO][5567] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.885 [INFO][5567] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.890 [WARNING][5567] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.890 [INFO][5567] ipam_plugin.go 439: Releasing address using workloadID ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" HandleID="k8s-pod-network.06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-csi--node--driver--prpxb-eth0" Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.893 [INFO][5567] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:27.896692 containerd[1477]: 2024-06-25 16:28:27.894 [INFO][5561] k8s.go 621: Teardown processing complete. ContainerID="06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467" Jun 25 16:28:27.897670 containerd[1477]: time="2024-06-25T16:28:27.897630071Z" level=info msg="TearDown network for sandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" successfully" Jun 25 16:28:27.907298 containerd[1477]: time="2024-06-25T16:28:27.905884685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:28:27.907298 containerd[1477]: time="2024-06-25T16:28:27.905988394Z" level=info msg="RemovePodSandbox \"06b41fda860ef331b6feba1ab3b41ab690f9403076f873bceb8eb1a47219a467\" returns successfully" Jun 25 16:28:27.913680 containerd[1477]: time="2024-06-25T16:28:27.913632956Z" level=info msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.946 [WARNING][5585] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f124e573-6d0c-4a4e-b4c6-5bf17013ade6", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245", Pod:"coredns-5dd5756b68-45gk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d7d8a873", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.946 [INFO][5585] k8s.go 608: Cleaning up netns ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.946 [INFO][5585] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" iface="eth0" netns="" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.946 [INFO][5585] k8s.go 615: Releasing IP address(es) ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.946 [INFO][5585] utils.go 188: Calico CNI releasing IP address ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.967 [INFO][5591] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.967 [INFO][5591] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.967 [INFO][5591] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.973 [WARNING][5591] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.973 [INFO][5591] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.974 [INFO][5591] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:27.977021 containerd[1477]: 2024-06-25 16:28:27.975 [INFO][5585] k8s.go 621: Teardown processing complete. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:27.977742 containerd[1477]: time="2024-06-25T16:28:27.977070348Z" level=info msg="TearDown network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" successfully" Jun 25 16:28:27.977742 containerd[1477]: time="2024-06-25T16:28:27.977109051Z" level=info msg="StopPodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" returns successfully" Jun 25 16:28:27.977836 containerd[1477]: time="2024-06-25T16:28:27.977762508Z" level=info msg="RemovePodSandbox for \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" Jun 25 16:28:27.977879 containerd[1477]: time="2024-06-25T16:28:27.977801611Z" level=info msg="Forcibly stopping sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\"" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.015 [WARNING][5609] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f124e573-6d0c-4a4e-b4c6-5bf17013ade6", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1b4fc7f4a0636478376a4e4275ac3909a29d994b94ccbd883a89a26c67cbc245", Pod:"coredns-5dd5756b68-45gk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d7d8a873", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.015 [INFO][5609] k8s.go 608: Cleaning up netns ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.015 [INFO][5609] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" iface="eth0" netns="" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.015 [INFO][5609] k8s.go 615: Releasing IP address(es) ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.015 [INFO][5609] utils.go 188: Calico CNI releasing IP address ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.036 [INFO][5615] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.036 [INFO][5615] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.036 [INFO][5615] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.042 [WARNING][5615] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.042 [INFO][5615] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" HandleID="k8s-pod-network.2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--45gk9-eth0" Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.043 [INFO][5615] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:28.046505 containerd[1477]: 2024-06-25 16:28:28.045 [INFO][5609] k8s.go 621: Teardown processing complete. ContainerID="2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b" Jun 25 16:28:28.047248 containerd[1477]: time="2024-06-25T16:28:28.046542102Z" level=info msg="TearDown network for sandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" successfully" Jun 25 16:28:28.057259 containerd[1477]: time="2024-06-25T16:28:28.057206311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:28:28.057519 containerd[1477]: time="2024-06-25T16:28:28.057294219Z" level=info msg="RemovePodSandbox \"2eeefd4adc8c2da5170dad866cc0a6c2a10c051cc9a2dddbffd113152d15421b\" returns successfully" Jun 25 16:28:28.057933 containerd[1477]: time="2024-06-25T16:28:28.057898970Z" level=info msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.092 [WARNING][5633] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"75c7c79a-d44c-47ce-a93f-c54170ddf76b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610", Pod:"coredns-5dd5756b68-d6rxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244ab3f2626", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.092 [INFO][5633] k8s.go 608: Cleaning up netns ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.092 [INFO][5633] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" iface="eth0" netns="" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.092 [INFO][5633] k8s.go 615: Releasing IP address(es) ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.092 [INFO][5633] utils.go 188: Calico CNI releasing IP address ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.111 [INFO][5639] ipam_plugin.go 411: Releasing address using handleID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.111 [INFO][5639] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.111 [INFO][5639] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.116 [WARNING][5639] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.116 [INFO][5639] ipam_plugin.go 439: Releasing address using workloadID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.118 [INFO][5639] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:28.120298 containerd[1477]: 2024-06-25 16:28:28.119 [INFO][5633] k8s.go 621: Teardown processing complete. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.121023 containerd[1477]: time="2024-06-25T16:28:28.120342593Z" level=info msg="TearDown network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" successfully" Jun 25 16:28:28.121023 containerd[1477]: time="2024-06-25T16:28:28.120381796Z" level=info msg="StopPodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" returns successfully" Jun 25 16:28:28.121023 containerd[1477]: time="2024-06-25T16:28:28.120938943Z" level=info msg="RemovePodSandbox for \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" Jun 25 16:28:28.121023 containerd[1477]: time="2024-06-25T16:28:28.120978047Z" level=info msg="Forcibly stopping sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\"" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.153 [WARNING][5657] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"75c7c79a-d44c-47ce-a93f-c54170ddf76b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-a46e2cd05c", ContainerID:"1e445883e3d5555d358f46d895275ed339d5ba2005979c6a0545e60cec08c610", Pod:"coredns-5dd5756b68-d6rxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244ab3f2626", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.153 [INFO][5657] k8s.go 608: Cleaning up netns ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.153 [INFO][5657] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" iface="eth0" netns="" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.153 [INFO][5657] k8s.go 615: Releasing IP address(es) ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.154 [INFO][5657] utils.go 188: Calico CNI releasing IP address ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.173 [INFO][5663] ipam_plugin.go 411: Releasing address using handleID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.173 [INFO][5663] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.173 [INFO][5663] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.178 [WARNING][5663] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.178 [INFO][5663] ipam_plugin.go 439: Releasing address using workloadID ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" HandleID="k8s-pod-network.da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Workload="ci--3815.2.4--a--a46e2cd05c-k8s-coredns--5dd5756b68--d6rxn-eth0" Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.180 [INFO][5663] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:28:28.184719 containerd[1477]: 2024-06-25 16:28:28.181 [INFO][5657] k8s.go 621: Teardown processing complete. ContainerID="da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e" Jun 25 16:28:28.184719 containerd[1477]: time="2024-06-25T16:28:28.182478389Z" level=info msg="TearDown network for sandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" successfully" Jun 25 16:28:28.190721 containerd[1477]: time="2024-06-25T16:28:28.190680388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:28:28.190871 containerd[1477]: time="2024-06-25T16:28:28.190804298Z" level=info msg="RemovePodSandbox \"da75339be487c8b337d96da9ba2da2acb61aca9e9a6c714cc598aa76ffa74f5e\" returns successfully" Jun 25 16:28:29.837626 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:28:29.837757 kernel: audit: type=1130 audit(1719332909.832:695): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.16.10:45896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.16.10:45896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.834166 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:45896.service - OpenSSH per-connection server daemon (10.200.16.10:45896). Jun 25 16:28:30.482000 audit[5678]: USER_ACCT pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.485856 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:30.486680 sshd[5678]: Accepted publickey for core from 10.200.16.10 port 45896 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:30.493002 systemd-logind[1471]: New session 10 of user core. 
Jun 25 16:28:30.505889 kernel: audit: type=1101 audit(1719332910.482:696): pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.505941 kernel: audit: type=1103 audit(1719332910.483:697): pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.483000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.505005 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 16:28:30.513436 kernel: audit: type=1006 audit(1719332910.483:698): pid=5678 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:28:30.483000 audit[5678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6a5023e0 a2=3 a3=7f31a4a26480 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:30.523038 kernel: audit: type=1300 audit(1719332910.483:698): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6a5023e0 a2=3 a3=7f31a4a26480 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:30.483000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:30.527967 kernel: audit: type=1327 audit(1719332910.483:698): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:30.509000 audit[5678]: USER_START pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.538566 kernel: audit: type=1105 audit(1719332910.509:699): pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.512000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:30.548355 kernel: audit: type=1103 audit(1719332910.512:700): pid=5680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:31.016817 sshd[5678]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:31.018000 audit[5678]: USER_END pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:31.020637 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:45896.service: Deactivated successfully. Jun 25 16:28:31.021613 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:28:31.023718 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:28:31.024749 systemd-logind[1471]: Removed session 10. Jun 25 16:28:31.018000 audit[5678]: CRED_DISP pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:31.039026 kernel: audit: type=1106 audit(1719332911.018:701): pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:31.039112 kernel: audit: type=1104 audit(1719332911.018:702): pid=5678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:31.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.16.10:45896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:31.100061 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.Uisqv4.mount: Deactivated successfully. Jun 25 16:28:36.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.16.10:48392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:36.130335 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:48392.service - OpenSSH per-connection server daemon (10.200.16.10:48392). Jun 25 16:28:36.143297 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:36.143571 kernel: audit: type=1130 audit(1719332916.129:704): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.16.10:48392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:36.766000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.769101 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:36.769722 sshd[5712]: Accepted publickey for core from 10.200.16.10 port 48392 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:36.775065 systemd-logind[1471]: New session 11 of user core. 
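The PROCTITLE values in the audit records above are hex-encoded command lines, with NUL bytes separating the arguments: the sshd entries decode to "sshd: core [priv]" and the kube-controller entries to the (truncated) kube-controller-manager command line. A minimal decoding sketch, assuming Python 3 is available on the analysis host; the decode_proctitle helper is illustrative and not part of any tool shown in this log:

# Minimal sketch: turn a hex-encoded audit PROCTITLE value back into a readable command line.
# Assumption: the value is the raw argv of the process, with NUL bytes between arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

# Value taken from the sshd records above; prints "sshd: core [priv]".
print(decode_proctitle("737368643A20636F7265205B707269765D"))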
Jun 25 16:28:36.790868 kernel: audit: type=1101 audit(1719332916.766:705): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.790938 kernel: audit: type=1103 audit(1719332916.766:706): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.766000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.790038 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:28:36.796617 kernel: audit: type=1006 audit(1719332916.766:707): pid=5712 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:28:36.766000 audit[5712]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7c410160 a2=3 a3=7fe6b6663480 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.806644 kernel: audit: type=1300 audit(1719332916.766:707): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7c410160 a2=3 a3=7fe6b6663480 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.766000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:36.811736 kernel: audit: type=1327 audit(1719332916.766:707): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:36.797000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.822283 kernel: audit: type=1105 audit(1719332916.797:708): pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.797000 audit[5714]: CRED_ACQ pid=5714 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:36.832178 kernel: audit: type=1103 audit(1719332916.797:709): pid=5714 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:37.294294 sshd[5712]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:37.295000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:37.298399 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:28:37.299896 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:48392.service: Deactivated successfully. Jun 25 16:28:37.300696 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:28:37.301988 systemd-logind[1471]: Removed session 11. Jun 25 16:28:37.295000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:37.316133 kernel: audit: type=1106 audit(1719332917.295:710): pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:37.316244 kernel: audit: type=1104 audit(1719332917.295:711): pid=5712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:37.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.16.10:48392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.412557 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:48408.service - OpenSSH per-connection server daemon (10.200.16.10:48408). Jun 25 16:28:42.423299 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:42.423417 kernel: audit: type=1130 audit(1719332922.411:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.16.10:48408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.16.10:48408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.053000 audit[5732]: USER_ACCT pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.057123 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:43.057798 sshd[5732]: Accepted publickey for core from 10.200.16.10 port 48408 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:43.063756 systemd-logind[1471]: New session 12 of user core. 
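Each SSH connection above produces the same audit lifecycle: USER_ACCT and CRED_ACQ on accept, USER_START when the session scope starts, USER_END and CRED_DISP when it closes, then SERVICE_STOP for the per-connection service unit. A rough sketch for pairing USER_START/USER_END records and reporting session length, assuming journal text in roughly the wrapped form shown here; the regex and the session_lengths helper are illustrative:

import re
from datetime import datetime

# Rough sketch: pair USER_START/USER_END audit records by pid and report how long each
# session lasted. Matches lines shaped like "... 16:28:30.509000 audit[5678]: USER_START pid=5678 ...".
RECORD = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: (USER_START|USER_END) pid=(\d+)")

def session_lengths(text: str) -> dict:
    starts, lengths = {}, {}
    for ts, kind, pid in RECORD.findall(text):
        t = datetime.strptime(ts, "%H:%M:%S.%f")
        if kind == "USER_START":
            starts[pid] = t
        elif pid in starts:
            lengths[pid] = (t - starts.pop(pid)).total_seconds()
    return lengths  # e.g. pid 5678 (session 10 above) comes out at roughly half a second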
Jun 25 16:28:43.076829 kernel: audit: type=1101 audit(1719332923.053:714): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.076862 kernel: audit: type=1103 audit(1719332923.055:715): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.076896 kernel: audit: type=1006 audit(1719332923.055:716): pid=5732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:28:43.055000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.076049 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:28:43.084127 kernel: audit: type=1300 audit(1719332923.055:716): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3ef4d560 a2=3 a3=7f923d590480 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:43.055000 audit[5732]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3ef4d560 a2=3 a3=7f923d590480 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:43.055000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:43.100457 kernel: audit: type=1327 audit(1719332923.055:716): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:43.080000 audit[5732]: USER_START pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.112147 kernel: audit: type=1105 audit(1719332923.080:717): pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.083000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.122157 kernel: audit: type=1103 audit(1719332923.083:718): pid=5734 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.565976 sshd[5732]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:43.566000 audit[5732]: USER_END pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.570236 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:28:43.571982 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:48408.service: Deactivated successfully. Jun 25 16:28:43.572967 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:28:43.574666 systemd-logind[1471]: Removed session 12. Jun 25 16:28:43.566000 audit[5732]: CRED_DISP pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.588371 kernel: audit: type=1106 audit(1719332923.566:719): pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.588456 kernel: audit: type=1104 audit(1719332923.566:720): pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:43.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.16.10:48408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.336355 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.GZrDnb.mount: Deactivated successfully. Jun 25 16:28:48.695163 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:48.695317 kernel: audit: type=1130 audit(1719332928.684:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.16.10:35902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:48.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.16.10:35902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:48.685149 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:35902.service - OpenSSH per-connection server daemon (10.200.16.10:35902). Jun 25 16:28:49.325000 audit[5770]: USER_ACCT pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.333427 systemd-logind[1471]: New session 13 of user core. 
Jun 25 16:28:49.370056 kernel: audit: type=1101 audit(1719332929.325:723): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.370111 kernel: audit: type=1103 audit(1719332929.326:724): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.370145 kernel: audit: type=1006 audit(1719332929.326:725): pid=5770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:28:49.370174 kernel: audit: type=1300 audit(1719332929.326:725): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3d2ae0b0 a2=3 a3=7fb03eb05480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:49.370202 kernel: audit: type=1327 audit(1719332929.326:725): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:49.326000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.326000 audit[5770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3d2ae0b0 a2=3 a3=7fb03eb05480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:49.326000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:49.370543 sshd[5770]: Accepted publickey for core from 10.200.16.10 port 35902 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:49.327699 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:49.369945 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 16:28:49.375000 audit[5770]: USER_START pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.388651 kernel: audit: type=1105 audit(1719332929.375:726): pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.388749 kernel: audit: type=1103 audit(1719332929.376:727): pid=5772 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.376000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.840630 sshd[5770]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:49.841000 audit[5770]: USER_END pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.844764 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:28:49.846535 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:35902.service: Deactivated successfully. Jun 25 16:28:49.847478 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:28:49.849541 systemd-logind[1471]: Removed session 13. Jun 25 16:28:49.841000 audit[5770]: CRED_DISP pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.862546 kernel: audit: type=1106 audit(1719332929.841:728): pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.863395 kernel: audit: type=1104 audit(1719332929.841:729): pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:49.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.16.10:35902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:49.960158 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:35908.service - OpenSSH per-connection server daemon (10.200.16.10:35908). 
Jun 25 16:28:49.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.4:22-10.200.16.10:35908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:50.625214 sshd[5782]: Accepted publickey for core from 10.200.16.10 port 35908 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:50.624000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:50.627171 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:50.626000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:50.626000 audit[5782]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca4bdd4e0 a2=3 a3=7fa4f3e15480 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:50.626000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:50.632666 systemd-logind[1471]: New session 14 of user core. Jun 25 16:28:50.640793 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:28:50.645000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:50.646000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:51.904385 sshd[5782]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:51.905000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:51.905000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:51.907974 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:35908.service: Deactivated successfully. Jun 25 16:28:51.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.4:22-10.200.16.10:35908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:51.909393 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:28:51.909479 systemd[1]: session-14.scope: Deactivated successfully. 
Jun 25 16:28:51.910928 systemd-logind[1471]: Removed session 14. Jun 25 16:28:52.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.4:22-10.200.16.10:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:52.024151 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:35920.service - OpenSSH per-connection server daemon (10.200.16.10:35920). Jun 25 16:28:52.681000 audit[5797]: USER_ACCT pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:52.681930 sshd[5797]: Accepted publickey for core from 10.200.16.10 port 35920 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:52.682000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:52.682000 audit[5797]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb796d480 a2=3 a3=7fe9735a4480 items=0 ppid=1 pid=5797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.682000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:52.683815 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:52.690466 systemd-logind[1471]: New session 15 of user core. Jun 25 16:28:52.693777 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:28:52.698000 audit[5797]: USER_START pid=5797 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:52.700000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:53.207867 sshd[5797]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:53.209000 audit[5797]: USER_END pid=5797 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:53.209000 audit[5797]: CRED_DISP pid=5797 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:53.211680 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:35920.service: Deactivated successfully. Jun 25 16:28:53.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.4:22-10.200.16.10:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:53.212828 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:28:53.213511 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:28:53.214445 systemd-logind[1471]: Removed session 15. Jun 25 16:28:58.336273 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:28:58.336407 kernel: audit: type=1130 audit(1719332938.324:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.4:22-10.200.16.10:44762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:58.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.4:22-10.200.16.10:44762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:58.326191 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:44762.service - OpenSSH per-connection server daemon (10.200.16.10:44762). Jun 25 16:28:58.972000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:58.973951 sshd[5809]: Accepted publickey for core from 10.200.16.10 port 44762 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:28:58.975728 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:58.981430 systemd-logind[1471]: New session 16 of user core. Jun 25 16:28:59.002297 kernel: audit: type=1101 audit(1719332938.972:750): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.002345 kernel: audit: type=1103 audit(1719332938.972:751): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.002374 kernel: audit: type=1006 audit(1719332938.972:752): pid=5809 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:28:59.002395 kernel: audit: type=1300 audit(1719332938.972:752): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcaa5ca790 a2=3 a3=7fb7b2b37480 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.972000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:58.972000 audit[5809]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcaa5ca790 a2=3 a3=7fb7b2b37480 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:59.001944 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:28:58.972000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:59.016242 kernel: audit: type=1327 audit(1719332938.972:752): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:59.000000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.026434 kernel: audit: type=1105 audit(1719332939.000:753): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.006000 audit[5811]: CRED_ACQ pid=5811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.035922 kernel: audit: type=1103 audit(1719332939.006:754): pid=5811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.491035 sshd[5809]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:59.490000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.494409 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:44762.service: Deactivated successfully. Jun 25 16:28:59.495270 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:28:59.496829 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:28:59.497781 systemd-logind[1471]: Removed session 16. Jun 25 16:28:59.491000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.512794 kernel: audit: type=1106 audit(1719332939.490:755): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.512926 kernel: audit: type=1104 audit(1719332939.491:756): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:28:59.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.4:22-10.200.16.10:44762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:01.097106 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.18pUUq.mount: Deactivated successfully. Jun 25 16:29:04.613549 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:59908.service - OpenSSH per-connection server daemon (10.200.16.10:59908). Jun 25 16:29:04.626694 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:04.626794 kernel: audit: type=1130 audit(1719332944.614:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.16.10:59908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:04.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.16.10:59908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:05.250000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.252722 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:05.254067 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 59908 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:05.251000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.265903 systemd-logind[1471]: New session 17 of user core. Jun 25 16:29:05.283498 kernel: audit: type=1101 audit(1719332945.250:759): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.283537 kernel: audit: type=1103 audit(1719332945.251:760): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.283559 kernel: audit: type=1006 audit(1719332945.251:761): pid=5867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:29:05.283581 kernel: audit: type=1300 audit(1719332945.251:761): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66670ea0 a2=3 a3=7f60d4a1b480 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.251000 audit[5867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66670ea0 a2=3 a3=7f60d4a1b480 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.282707 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:29:05.295897 kernel: audit: type=1327 audit(1719332945.251:761): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:05.251000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:05.288000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.309738 kernel: audit: type=1105 audit(1719332945.288:762): pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.295000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.320054 kernel: audit: type=1103 audit(1719332945.295:763): pid=5869 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.771154 sshd[5867]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:05.772000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.776679 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:59908.service: Deactivated successfully. Jun 25 16:29:05.777466 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:29:05.779253 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:29:05.780174 systemd-logind[1471]: Removed session 17. Jun 25 16:29:05.774000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.793293 kernel: audit: type=1106 audit(1719332945.772:764): pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.793387 kernel: audit: type=1104 audit(1719332945.774:765): pid=5867 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:05.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.16.10:59908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:10.893428 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:59920.service - OpenSSH per-connection server daemon (10.200.16.10:59920). Jun 25 16:29:10.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.16.10:59920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:10.896301 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:10.896416 kernel: audit: type=1130 audit(1719332950.893:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.16.10:59920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:11.538000 audit[5878]: USER_ACCT pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.539676 sshd[5878]: Accepted publickey for core from 10.200.16.10 port 59920 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:11.541501 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:11.550854 kernel: audit: type=1101 audit(1719332951.538:768): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.540000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.554338 systemd-logind[1471]: New session 18 of user core. Jun 25 16:29:11.586030 kernel: audit: type=1103 audit(1719332951.540:769): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.586095 kernel: audit: type=1006 audit(1719332951.540:770): pid=5878 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:29:11.586127 kernel: audit: type=1300 audit(1719332951.540:770): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1c78d070 a2=3 a3=7f3582ecc480 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.586155 kernel: audit: type=1327 audit(1719332951.540:770): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:11.540000 audit[5878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1c78d070 a2=3 a3=7f3582ecc480 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.540000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:11.585900 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:29:11.590000 audit[5878]: USER_START pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.598000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.629861 kernel: audit: type=1105 audit(1719332951.590:771): pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:11.630036 kernel: audit: type=1103 audit(1719332951.598:772): pid=5882 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:12.072578 sshd[5878]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:12.073000 audit[5878]: USER_END pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:12.077790 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:59920.service: Deactivated successfully. Jun 25 16:29:12.078631 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:29:12.082295 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:29:12.083244 systemd-logind[1471]: Removed session 18. Jun 25 16:29:12.075000 audit[5878]: CRED_DISP pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:12.094435 kernel: audit: type=1106 audit(1719332952.073:773): pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:12.094538 kernel: audit: type=1104 audit(1719332952.075:774): pid=5878 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:12.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.16.10:59920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:14.335145 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.QD5hjE.mount: Deactivated successfully. 
Jun 25 16:29:17.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.16.10:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:17.193187 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:35796.service - OpenSSH per-connection server daemon (10.200.16.10:35796). Jun 25 16:29:17.195553 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:17.195820 kernel: audit: type=1130 audit(1719332957.191:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.16.10:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:17.834000 audit[5918]: USER_ACCT pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.836402 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 35796 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:17.838299 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:17.834000 audit[5918]: CRED_ACQ pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.852298 systemd-logind[1471]: New session 19 of user core. Jun 25 16:29:17.867438 kernel: audit: type=1101 audit(1719332957.834:777): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.867477 kernel: audit: type=1103 audit(1719332957.834:778): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.867503 kernel: audit: type=1006 audit(1719332957.834:779): pid=5918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:29:17.866892 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:29:17.834000 audit[5918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5248bf20 a2=3 a3=7f0e7cc7d480 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.881957 kernel: audit: type=1300 audit(1719332957.834:779): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5248bf20 a2=3 a3=7f0e7cc7d480 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.834000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:17.886872 kernel: audit: type=1327 audit(1719332957.834:779): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:17.871000 audit[5918]: USER_START pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.897823 kernel: audit: type=1105 audit(1719332957.871:780): pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.871000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:17.908036 kernel: audit: type=1103 audit(1719332957.871:781): pid=5920 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:18.351344 sshd[5918]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:18.351000 audit[5918]: USER_END pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:18.355151 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:35796.service: Deactivated successfully. Jun 25 16:29:18.356148 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:29:18.358085 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:29:18.359168 systemd-logind[1471]: Removed session 19. 
Jun 25 16:29:18.351000 audit[5918]: CRED_DISP pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:18.374958 kernel: audit: type=1106 audit(1719332958.351:782): pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:18.375060 kernel: audit: type=1104 audit(1719332958.351:783): pid=5918 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:18.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.16.10:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:18.467169 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:35808.service - OpenSSH per-connection server daemon (10.200.16.10:35808). Jun 25 16:29:18.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.4:22-10.200.16.10:35808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:19.114000 audit[5930]: USER_ACCT pid=5930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.116670 sshd[5930]: Accepted publickey for core from 10.200.16.10 port 35808 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:19.116000 audit[5930]: CRED_ACQ pid=5930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.116000 audit[5930]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedce0b9d0 a2=3 a3=7f77f0346480 items=0 ppid=1 pid=5930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:19.116000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:19.119619 sshd[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:19.124957 systemd-logind[1471]: New session 20 of user core. Jun 25 16:29:19.132786 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:29:19.136000 audit[5930]: USER_START pid=5930 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.137000 audit[5932]: CRED_ACQ pid=5932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.697080 sshd[5930]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:19.696000 audit[5930]: USER_END pid=5930 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.697000 audit[5930]: CRED_DISP pid=5930 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:19.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.4:22-10.200.16.10:35808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:19.700075 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:35808.service: Deactivated successfully. Jun 25 16:29:19.701865 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:29:19.701899 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:29:19.703209 systemd-logind[1471]: Removed session 20. Jun 25 16:29:19.815604 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:35824.service - OpenSSH per-connection server daemon (10.200.16.10:35824). Jun 25 16:29:19.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.4:22-10.200.16.10:35824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:20.459000 audit[5939]: USER_ACCT pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:20.461569 sshd[5939]: Accepted publickey for core from 10.200.16.10 port 35824 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:20.461000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:20.461000 audit[5939]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1e368850 a2=3 a3=7f9fe9139480 items=0 ppid=1 pid=5939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:20.461000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:20.463324 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:20.468490 systemd-logind[1471]: New session 21 of user core. Jun 25 16:29:20.469785 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:29:20.473000 audit[5939]: USER_START pid=5939 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:20.474000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:21.683000 audit[5951]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.683000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff38136a00 a2=0 a3=7fff381369ec items=0 ppid=2968 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.684000 audit[5951]: NETFILTER_CFG table=nat:134 family=2 entries=22 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.684000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff38136a00 a2=0 a3=0 items=0 ppid=2968 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.705000 audit[5953]: NETFILTER_CFG table=filter:135 family=2 entries=32 op=nft_register_rule pid=5953 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jun 25 16:29:21.705000 audit[5953]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff532bf6b0 a2=0 a3=7fff532bf69c items=0 ppid=2968 pid=5953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.707000 audit[5953]: NETFILTER_CFG table=nat:136 family=2 entries=22 op=nft_register_rule pid=5953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:21.707000 audit[5953]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff532bf6b0 a2=0 a3=0 items=0 ppid=2968 pid=5953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:21.781837 sshd[5939]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:21.782000 audit[5939]: USER_END pid=5939 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:21.782000 audit[5939]: CRED_DISP pid=5939 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:21.785970 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:35824.service: Deactivated successfully. Jun 25 16:29:21.787033 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:29:21.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.4:22-10.200.16.10:35824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:21.788391 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:29:21.789531 systemd-logind[1471]: Removed session 21. Jun 25 16:29:21.902642 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:35834.service - OpenSSH per-connection server daemon (10.200.16.10:35834). Jun 25 16:29:21.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.4:22-10.200.16.10:35834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:22.550054 kernel: kauditd_printk_skb: 36 callbacks suppressed Jun 25 16:29:22.559627 kernel: audit: type=1101 audit(1719332962.545:808): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.545000 audit[5956]: USER_ACCT pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.548897 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:22.560134 sshd[5956]: Accepted publickey for core from 10.200.16.10 port 35834 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:22.546000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.566752 systemd-logind[1471]: New session 22 of user core. Jun 25 16:29:22.595048 kernel: audit: type=1103 audit(1719332962.546:809): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.595208 kernel: audit: type=1006 audit(1719332962.546:810): pid=5956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:29:22.546000 audit[5956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa5983d80 a2=3 a3=7f34ce2e3480 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:22.595981 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 16:29:22.606559 kernel: audit: type=1300 audit(1719332962.546:810): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa5983d80 a2=3 a3=7f34ce2e3480 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:22.546000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:22.602000 audit[5956]: USER_START pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.626765 kernel: audit: type=1327 audit(1719332962.546:810): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:22.626844 kernel: audit: type=1105 audit(1719332962.602:811): pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.626868 kernel: audit: type=1103 audit(1719332962.604:812): pid=5958 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:22.604000 audit[5958]: CRED_ACQ pid=5958 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:23.268038 sshd[5956]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:23.268000 audit[5956]: USER_END pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:23.271247 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:35834.service: Deactivated successfully. Jun 25 16:29:23.272100 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:29:23.275390 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:29:23.276505 systemd-logind[1471]: Removed session 22. 
Jun 25 16:29:23.268000 audit[5956]: CRED_DISP pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:23.289834 kernel: audit: type=1106 audit(1719332963.268:813): pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:23.289943 kernel: audit: type=1104 audit(1719332963.268:814): pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:23.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.4:22-10.200.16.10:35834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:23.314977 kernel: audit: type=1131 audit(1719332963.269:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.4:22-10.200.16.10:35834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:23.341000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.341000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.341000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0029f36c0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:23.341000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d7de00 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:23.341000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:23.341000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:23.385482 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:35840.service - OpenSSH per-connection server daemon 
(10.200.16.10:35840). Jun 25 16:29:23.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.4:22-10.200.16.10:35840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:23.608000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.608000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c016d54000 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.608000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:23.609000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.609000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c0126d5ce0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.609000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:23.609000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.609000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c01215a4e0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.609000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:23.629000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.629000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c016d54270 a2=fc6 a3=0 items=0 ppid=2527 
pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.629000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:23.661000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.661000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c012a54080 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.661000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:23.661000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:23.661000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c01215a750 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:29:23.661000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:29:24.029000 audit[5971]: USER_ACCT pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.030229 sshd[5971]: Accepted publickey for core from 10.200.16.10 port 35840 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:24.030000 audit[5971]: CRED_ACQ pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.031000 audit[5971]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2a3bc9e0 a2=3 a3=7f3ac6c48480 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:24.031000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:24.032145 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:24.037682 systemd-logind[1471]: New session 23 of user core. Jun 25 16:29:24.041778 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:29:24.046000 audit[5971]: USER_START pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.047000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.545768 sshd[5971]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:24.547000 audit[5971]: USER_END pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.547000 audit[5971]: CRED_DISP pid=5971 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:24.550095 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:35840.service: Deactivated successfully. Jun 25 16:29:24.551019 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:29:24.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.4:22-10.200.16.10:35840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:24.551618 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:29:24.552458 systemd-logind[1471]: Removed session 23. Jun 25 16:29:26.235207 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.OY5458.mount: Deactivated successfully. 
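The AVC records in this stretch are SELinux denials: kube-apiserver and kube-controller-manager, confined as container_t, try to place inotify watches on the certificate files under /etc/kubernetes/pki, which carry the etc_t label, and the watch permission is refused (on x86_64, syscall 254 is inotify_add_watch, exit=-13 is -EACCES, and permissive=0 means the denial is enforced rather than only logged). A minimal sketch for pulling the interesting fields out of such a line with plain Python (the field handling is ours, chosen for illustration):

    import re

    # key=value pairs, where the value is either quoted or a bare token.
    AVC_FIELDS = re.compile(r'(\w+)=("[^"]*"|\S+)')

    def parse_avc(line: str) -> dict:
        """Extract key=value pairs from an SELinux AVC audit line."""
        return {k: v.strip('"') for k, v in AVC_FIELDS.findall(line)}

    line = ('audit[2707]: AVC avc: denied { watch } for pid=2707 '
            'comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" '
            'dev="overlay" ino=4688655 '
            'scontext=system_u:system_r:container_t:s0:c551,c614 '
            'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')
    rec = parse_avc(line)
    print(rec["comm"], rec["tclass"], rec["path"], rec["permissive"])
    # -> kube-controller file /etc/kubernetes/pki/ca.crt 0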
Jun 25 16:29:27.040000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:27.040000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002b6cde0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:27.040000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:27.043000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:27.043000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0029f3ba0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:27.043000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:27.045000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:27.045000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002b6ce00 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:27.045000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:27.046000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:27.046000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002b6cfa0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:29:27.046000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:29.662454 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:46804.service - OpenSSH per-connection server daemon (10.200.16.10:46804). Jun 25 16:29:29.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.4:22-10.200.16.10:46804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.664493 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 16:29:29.664605 kernel: audit: type=1130 audit(1719332969.662:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.4:22-10.200.16.10:46804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:30.305000 audit[6005]: USER_ACCT pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.314058 systemd-logind[1471]: New session 24 of user core. Jun 25 16:29:30.334889 kernel: audit: type=1101 audit(1719332970.305:838): pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.334946 kernel: audit: type=1103 audit(1719332970.307:839): pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.334974 kernel: audit: type=1006 audit(1719332970.307:840): pid=6005 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:29:30.335000 kernel: audit: type=1300 audit(1719332970.307:840): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb24a8be0 a2=3 a3=7f31dd5e5480 items=0 ppid=1 pid=6005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:30.307000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.307000 audit[6005]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb24a8be0 a2=3 a3=7f31dd5e5480 items=0 ppid=1 pid=6005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:30.335184 sshd[6005]: Accepted publickey for core from 10.200.16.10 port 46804 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:30.308223 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:30.333996 systemd[1]: Started session-24.scope - Session 24 of 
User core. Jun 25 16:29:30.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:30.350865 kernel: audit: type=1327 audit(1719332970.307:840): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:30.339000 audit[6005]: USER_START pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.363771 kernel: audit: type=1105 audit(1719332970.339:841): pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.344000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.375638 kernel: audit: type=1103 audit(1719332970.344:842): pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.817171 sshd[6005]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:30.818000 audit[6005]: USER_END pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.821244 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:46804.service: Deactivated successfully. Jun 25 16:29:30.822066 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:29:30.819000 audit[6005]: CRED_DISP pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.831550 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:29:30.832710 systemd-logind[1471]: Removed session 24. Jun 25 16:29:30.840377 kernel: audit: type=1106 audit(1719332970.818:843): pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.840467 kernel: audit: type=1104 audit(1719332970.819:844): pid=6005 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.4:22-10.200.16.10:46804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:31.096367 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.LlaMsa.mount: Deactivated successfully. Jun 25 16:29:35.941549 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:47486.service - OpenSSH per-connection server daemon (10.200.16.10:47486). Jun 25 16:29:35.954645 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:35.954752 kernel: audit: type=1130 audit(1719332975.941:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.16.10:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:35.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.16.10:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:36.587000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.590003 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:36.591025 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 47486 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:36.587000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.607359 systemd-logind[1471]: New session 25 of user core. Jun 25 16:29:36.618553 kernel: audit: type=1101 audit(1719332976.587:847): pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.618607 kernel: audit: type=1103 audit(1719332976.587:848): pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.618631 kernel: audit: type=1006 audit(1719332976.587:849): pid=6044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:29:36.618653 kernel: audit: type=1300 audit(1719332976.587:849): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda4e48f60 a2=3 a3=7f9fbae12480 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:36.587000 audit[6044]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda4e48f60 a2=3 a3=7f9fbae12480 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:36.617897 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:29:36.587000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:36.627000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.646040 kernel: audit: type=1327 audit(1719332976.587:849): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:36.646155 kernel: audit: type=1105 audit(1719332976.627:850): pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.634000 audit[6046]: CRED_ACQ pid=6046 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:36.656671 kernel: audit: type=1103 audit(1719332976.634:851): pid=6046 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:37.105866 sshd[6044]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:37.106000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:37.110315 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:47486.service: Deactivated successfully. Jun 25 16:29:37.111149 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:29:37.119664 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:29:37.120826 kernel: audit: type=1106 audit(1719332977.106:852): pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:37.107000 audit[6044]: CRED_DISP pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:37.121233 systemd-logind[1471]: Removed session 25. Jun 25 16:29:37.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.16.10:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:37.131616 kernel: audit: type=1104 audit(1719332977.107:853): pid=6044 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:41.837649 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:41.837795 kernel: audit: type=1325 audit(1719332981.833:855): table=filter:137 family=2 entries=20 op=nft_register_rule pid=6057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:41.833000 audit[6057]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=6057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:41.833000 audit[6057]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe6cf47600 a2=0 a3=7ffe6cf475ec items=0 ppid=2968 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:41.854397 kernel: audit: type=1300 audit(1719332981.833:855): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe6cf47600 a2=0 a3=7ffe6cf475ec items=0 ppid=2968 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:41.833000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:41.861242 kernel: audit: type=1327 audit(1719332981.833:855): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:41.835000 audit[6057]: NETFILTER_CFG table=nat:138 family=2 entries=106 op=nft_register_chain pid=6057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:41.867691 kernel: audit: type=1325 audit(1719332981.835:856): table=nat:138 family=2 entries=106 op=nft_register_chain pid=6057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:41.835000 audit[6057]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe6cf47600 a2=0 a3=7ffe6cf475ec items=0 ppid=2968 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:41.879951 kernel: audit: type=1300 audit(1719332981.835:856): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe6cf47600 a2=0 a3=7ffe6cf475ec items=0 ppid=2968 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:41.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:41.886660 kernel: audit: type=1327 audit(1719332981.835:856): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:42.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.16.10:47490 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.228148 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:47490.service - OpenSSH per-connection server daemon (10.200.16.10:47490). Jun 25 16:29:42.238628 kernel: audit: type=1130 audit(1719332982.227:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.16.10:47490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.870000 audit[6060]: USER_ACCT pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:42.872567 sshd[6060]: Accepted publickey for core from 10.200.16.10 port 47490 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:42.874453 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:42.882004 systemd-logind[1471]: New session 26 of user core. Jun 25 16:29:42.894752 kernel: audit: type=1101 audit(1719332982.870:858): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:42.894799 kernel: audit: type=1103 audit(1719332982.872:859): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:42.894818 kernel: audit: type=1006 audit(1719332982.872:860): pid=6060 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:29:42.872000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:42.894025 systemd[1]: Started session-26.scope - Session 26 of User core. 
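The two NETFILTER_CFG records at 16:29:41 show iptables-restore, running through the nft backend (exe=/usr/sbin/xtables-nft-multi), registering 20 rule entries in the filter table and 106 chain entries in the nat table. Decoding their PROCTITLE the same way as above shows the run used --noflush, i.e. an incremental update that leaves existing rules in place, together with --counters; a self-contained one-off check:

    # Hex value copied verbatim from the NETFILTER_CFG records above.
    hexval = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(" ".join(p.decode() for p in bytes.fromhex(hexval).split(b"\x00")))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters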
Jun 25 16:29:42.872000 audit[6060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff834ee600 a2=3 a3=7f570155c480 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:42.872000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:42.900000 audit[6060]: USER_START pid=6060 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:42.902000 audit[6062]: CRED_ACQ pid=6062 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:43.385735 sshd[6060]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:43.385000 audit[6060]: USER_END pid=6060 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:43.385000 audit[6060]: CRED_DISP pid=6060 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:43.389137 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:47490.service: Deactivated successfully. Jun 25 16:29:43.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.16.10:47490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:43.390314 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:29:43.391333 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:29:43.392327 systemd-logind[1471]: Removed session 26. Jun 25 16:29:44.334528 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.Bzy8El.mount: Deactivated successfully. Jun 25 16:29:48.516222 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:29:48.520293 kernel: audit: type=1130 audit(1719332988.503:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.16.10:57010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:48.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.16.10:57010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:48.505159 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:57010.service - OpenSSH per-connection server daemon (10.200.16.10:57010). 
Jun 25 16:29:49.142000 audit[6099]: USER_ACCT pid=6099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.143932 sshd[6099]: Accepted publickey for core from 10.200.16.10 port 57010 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:49.145869 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:49.143000 audit[6099]: CRED_ACQ pid=6099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.157948 systemd-logind[1471]: New session 27 of user core. Jun 25 16:29:49.174969 kernel: audit: type=1101 audit(1719332989.142:867): pid=6099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.175019 kernel: audit: type=1103 audit(1719332989.143:868): pid=6099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.175044 kernel: audit: type=1006 audit(1719332989.143:869): pid=6099 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:29:49.175067 kernel: audit: type=1300 audit(1719332989.143:869): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd91349320 a2=3 a3=7f5614db3480 items=0 ppid=1 pid=6099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:49.143000 audit[6099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd91349320 a2=3 a3=7f5614db3480 items=0 ppid=1 pid=6099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:49.174132 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 16:29:49.143000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:49.189190 kernel: audit: type=1327 audit(1719332989.143:869): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:49.180000 audit[6099]: USER_START pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.200000 kernel: audit: type=1105 audit(1719332989.180:870): pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.182000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.210102 kernel: audit: type=1103 audit(1719332989.182:871): pid=6103 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.653422 sshd[6099]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:49.653000 audit[6099]: USER_END pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.657274 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:29:49.659057 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:57010.service: Deactivated successfully. Jun 25 16:29:49.660001 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:29:49.661910 systemd-logind[1471]: Removed session 27. Jun 25 16:29:49.653000 audit[6099]: CRED_DISP pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.677016 kernel: audit: type=1106 audit(1719332989.653:872): pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.677133 kernel: audit: type=1104 audit(1719332989.653:873): pid=6099 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:49.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.16.10:57010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:54.787732 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.16.10:33230.service - OpenSSH per-connection server daemon (10.200.16.10:33230). Jun 25 16:29:54.800863 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:54.801022 kernel: audit: type=1130 audit(1719332994.786:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.16.10:33230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:54.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.16.10:33230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:55.429000 audit[6118]: USER_ACCT pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.432721 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 33230 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:55.432535 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:55.429000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.447325 systemd-logind[1471]: New session 28 of user core. Jun 25 16:29:55.460840 kernel: audit: type=1101 audit(1719332995.429:876): pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.460897 kernel: audit: type=1103 audit(1719332995.429:877): pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.460930 kernel: audit: type=1006 audit(1719332995.429:878): pid=6118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:29:55.459949 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 16:29:55.429000 audit[6118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc6b0d920 a2=3 a3=7f54fde61480 items=0 ppid=1 pid=6118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:55.470684 kernel: audit: type=1300 audit(1719332995.429:878): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc6b0d920 a2=3 a3=7f54fde61480 items=0 ppid=1 pid=6118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:55.429000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:55.475859 kernel: audit: type=1327 audit(1719332995.429:878): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:55.465000 audit[6118]: USER_START pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.486896 kernel: audit: type=1105 audit(1719332995.465:879): pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.471000 audit[6120]: CRED_ACQ pid=6120 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.498133 kernel: audit: type=1103 audit(1719332995.471:880): pid=6120 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.942080 sshd[6118]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:55.942000 audit[6118]: USER_END pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.946356 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:29:55.947913 systemd[1]: sshd@25-10.200.8.4:22-10.200.16.10:33230.service: Deactivated successfully. Jun 25 16:29:55.948896 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:29:55.950529 systemd-logind[1471]: Removed session 28. 
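Each SSH connection in this stretch follows the same audit lifecycle: SERVICE_START for the per-connection sshd@ unit, USER_ACCT and CRED_ACQ for the "core" account, a SYSCALL/PROCTITLE pair as the login uid is assigned, USER_START when the PAM session opens, and USER_END, CRED_DISP and SERVICE_STOP when it closes; sessions 22 through 30 each last well under a second. A minimal sketch that pairs the systemd-logind open/close messages in a journal excerpt like this one (the regexes and function name are ours):

    import re

    NEW = re.compile(r"New session (\d+) of user (\w+)")
    GONE = re.compile(r"Removed session (\d+)\.")

    def session_summary(journal_text: str) -> dict:
        """Map session id -> (user, was_closed) from logind messages."""
        opened = {m.group(1): m.group(2) for m in NEW.finditer(journal_text)}
        closed = {m.group(1) for m in GONE.finditer(journal_text)}
        return {sid: (user, sid in closed) for sid, user in opened.items()}

    # Feeding this section in as one string reports sessions 22-30,
    # all for user "core" and all closed again.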
Jun 25 16:29:55.942000 audit[6118]: CRED_DISP pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.963796 kernel: audit: type=1106 audit(1719332995.942:881): pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.963923 kernel: audit: type=1104 audit(1719332995.942:882): pid=6118 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:55.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.16.10:33230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:01.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.16.10:33240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:01.061353 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.16.10:33240.service - OpenSSH per-connection server daemon (10.200.16.10:33240). Jun 25 16:30:01.064234 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:01.064303 kernel: audit: type=1130 audit(1719333001.061:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.16.10:33240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:01.115448 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.PwAzGl.mount: Deactivated successfully. Jun 25 16:30:01.714000 audit[6130]: USER_ACCT pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.717224 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:01.719067 sshd[6130]: Accepted publickey for core from 10.200.16.10 port 33240 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:30:01.714000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.731568 systemd-logind[1471]: New session 29 of user core. 
Jun 25 16:30:01.739499 kernel: audit: type=1101 audit(1719333001.714:885): pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.739624 kernel: audit: type=1103 audit(1719333001.714:886): pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.739660 kernel: audit: type=1006 audit(1719333001.714:887): pid=6130 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 16:30:01.738900 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 16:30:01.714000 audit[6130]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff386e9310 a2=3 a3=7f5b116c5480 items=0 ppid=1 pid=6130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.754125 kernel: audit: type=1300 audit(1719333001.714:887): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff386e9310 a2=3 a3=7f5b116c5480 items=0 ppid=1 pid=6130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.714000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:01.758869 kernel: audit: type=1327 audit(1719333001.714:887): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:01.744000 audit[6130]: USER_START pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.780654 kernel: audit: type=1105 audit(1719333001.744:888): pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.744000 audit[6155]: CRED_ACQ pid=6155 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:01.793637 kernel: audit: type=1103 audit(1719333001.744:889): pid=6155 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:02.231941 sshd[6130]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:02.233000 audit[6130]: USER_END pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:02.235662 systemd[1]: 
sshd@26-10.200.8.4:22-10.200.16.10:33240.service: Deactivated successfully. Jun 25 16:30:02.236483 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 16:30:02.240259 systemd-logind[1471]: Session 29 logged out. Waiting for processes to exit. Jun 25 16:30:02.241263 systemd-logind[1471]: Removed session 29. Jun 25 16:30:02.233000 audit[6130]: CRED_DISP pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:02.254287 kernel: audit: type=1106 audit(1719333002.233:890): pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:02.254392 kernel: audit: type=1104 audit(1719333002.233:891): pid=6130 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:02.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.16.10:33240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:07.355530 systemd[1]: Started sshd@27-10.200.8.4:22-10.200.16.10:56938.service - OpenSSH per-connection server daemon (10.200.16.10:56938). Jun 25 16:30:07.366219 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:07.366347 kernel: audit: type=1130 audit(1719333007.355:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.16.10:56938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:07.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.16.10:56938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:07.998000 audit[6180]: USER_ACCT pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.000900 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:08.002105 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 56938 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:30:07.999000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.015861 systemd-logind[1471]: New session 30 of user core. 
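Each type=1327 PROCTITLE record above stores the audited process's command line as hex-encoded bytes, with NUL bytes separating argv elements. A short decoding helper (the function name is mine, not from any library):

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes, with NUL bytes
    separating argv elements (rendered as spaces here)."""
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

The value 737368643A20636F7265205B707269765D that accompanies these sshd records decodes to "sshd: core [priv]"; the much longer proctitle values on the runc and kube-apiserver/kube-controller-manager records later in the log decode the same way, with the 00 bytes marking the boundaries between command-line arguments.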
Jun 25 16:30:08.027845 kernel: audit: type=1101 audit(1719333007.998:894): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.027885 kernel: audit: type=1103 audit(1719333007.999:895): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.027911 kernel: audit: type=1006 audit(1719333007.999:896): pid=6180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jun 25 16:30:08.027937 kernel: audit: type=1300 audit(1719333007.999:896): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd731ebd30 a2=3 a3=7fe2c6a49480 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:07.999000 audit[6180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd731ebd30 a2=3 a3=7fe2c6a49480 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:08.027040 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 25 16:30:07.999000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:08.042616 kernel: audit: type=1327 audit(1719333007.999:896): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:08.033000 audit[6180]: USER_START pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.055437 kernel: audit: type=1105 audit(1719333008.033:897): pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.037000 audit[6182]: CRED_ACQ pid=6182 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.068623 kernel: audit: type=1103 audit(1719333008.037:898): pid=6182 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.518910 sshd[6180]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:08.520000 audit[6180]: USER_END pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.527897 systemd[1]: 
sshd@27-10.200.8.4:22-10.200.16.10:56938.service: Deactivated successfully. Jun 25 16:30:08.528887 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 16:30:08.530496 systemd-logind[1471]: Session 30 logged out. Waiting for processes to exit. Jun 25 16:30:08.531455 systemd-logind[1471]: Removed session 30. Jun 25 16:30:08.524000 audit[6180]: CRED_DISP pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.544121 kernel: audit: type=1106 audit(1719333008.520:899): pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.544238 kernel: audit: type=1104 audit(1719333008.524:900): pid=6180 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:08.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.16.10:56938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:13.641695 systemd[1]: Started sshd@28-10.200.8.4:22-10.200.16.10:56954.service - OpenSSH per-connection server daemon (10.200.16.10:56954). Jun 25 16:30:13.655354 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:13.655475 kernel: audit: type=1130 audit(1719333013.641:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.4:22-10.200.16.10:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:13.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.4:22-10.200.16.10:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:14.306000 audit[6196]: USER_ACCT pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.309352 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:14.311074 sshd[6196]: Accepted publickey for core from 10.200.16.10 port 56954 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:30:14.318621 kernel: audit: type=1101 audit(1719333014.306:903): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.308000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.340548 kernel: audit: type=1103 audit(1719333014.308:904): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.340663 kernel: audit: type=1006 audit(1719333014.308:905): pid=6196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jun 25 16:30:14.341794 kernel: audit: type=1300 audit(1719333014.308:905): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95e98f80 a2=3 a3=7f418c13e480 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:14.308000 audit[6196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95e98f80 a2=3 a3=7f418c13e480 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:14.354121 kernel: audit: type=1327 audit(1719333014.308:905): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:14.308000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:14.356304 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.FwdijU.mount: Deactivated successfully. Jun 25 16:30:14.361187 systemd-logind[1471]: New session 31 of user core. Jun 25 16:30:14.366835 systemd[1]: Started session-31.scope - Session 31 of User core. 
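kauditd prints most of these records twice: once under their symbolic name (USER_ACCT, CRED_ACQ, USER_START, ...) and once as a numeric type= echo from the kernel ring buffer. A small lookup table covering the type numbers that occur in this part of the log, so the numeric echoes can be annotated when grepping; the pairings follow what the log itself shows, plus the standard linux/audit.h names:

# Numeric audit record types seen in this stretch of the log, paired with the
# symbolic names kauditd/auditd print for the same records.
AUDIT_TYPES = {
    1006: "LOGIN",
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1327: "PROCTITLE",
    1334: "BPF",
    1400: "AVC",
}

def audit_type_name(num: int) -> str:
    """Return the symbolic name for a numeric type= field, if known."""
    return AUDIT_TYPES.get(num, f"UNKNOWN({num})")

print(audit_type_name(1106))  # USER_END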
Jun 25 16:30:14.375000 audit[6196]: USER_START pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.393628 kernel: audit: type=1105 audit(1719333014.375:906): pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.394000 audit[6212]: CRED_ACQ pid=6212 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.406639 kernel: audit: type=1103 audit(1719333014.394:907): pid=6212 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.826157 sshd[6196]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:14.827000 audit[6196]: USER_END pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.829841 systemd[1]: sshd@28-10.200.8.4:22-10.200.16.10:56954.service: Deactivated successfully. Jun 25 16:30:14.830667 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 16:30:14.832343 systemd-logind[1471]: Session 31 logged out. Waiting for processes to exit. Jun 25 16:30:14.833280 systemd-logind[1471]: Removed session 31. Jun 25 16:30:14.841617 kernel: audit: type=1106 audit(1719333014.827:908): pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.841714 kernel: audit: type=1104 audit(1719333014.827:909): pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.827000 audit[6196]: CRED_DISP pid=6196 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:14.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.4:22-10.200.16.10:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:23.343000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.345922 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:23.346036 kernel: audit: type=1400 audit(1719333023.343:912): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.343000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.366298 kernel: audit: type=1400 audit(1719333023.343:911): avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.343000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002496840 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:23.389169 kernel: audit: type=1300 audit(1719333023.343:911): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002496840 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:23.343000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:23.401317 kernel: audit: type=1327 audit(1719333023.343:911): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:23.343000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00099fa40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:23.343000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:23.427297 kernel: audit: type=1300 audit(1719333023.343:912): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00099fa40 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:23.427425 kernel: audit: type=1327 audit(1719333023.343:912): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:23.611000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.611000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c007472360 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.643557 kernel: audit: type=1400 audit(1719333023.611:913): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.643763 kernel: audit: type=1300 audit(1719333023.611:913): arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c007472360 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.611000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.657621 kernel: audit: type=1327 audit(1719333023.611:913): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.617000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.671262 kernel: audit: type=1400 audit(1719333023.617:914): avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.617000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00e995540 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.617000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.619000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.619000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c0074087b0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.619000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.638000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.638000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c0074725d0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.638000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.662000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.662000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c0074090b0 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.662000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:23.663000 audit[2655]: AVC avc: denied { watch } for pid=2655 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c70,c623 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:23.663000 audit[2655]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00e531c00 a2=fc6 a3=0 items=0 ppid=2527 pid=2655 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c70,c623 key=(null) Jun 25 16:30:23.663000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E34002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:30:26.235990 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.qnKXvd.mount: Deactivated successfully. Jun 25 16:30:27.042000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:27.042000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002363d80 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:27.042000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:27.044000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:27.044000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00099ff20 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:27.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:27.046000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:27.046000 audit[2707]: AVC avc: denied { watch } for pid=2707 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:27.046000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002363da0 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:27.046000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:27.046000 audit[2707]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d0a200 a2=fc6 a3=0 items=0 ppid=2531 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:27.046000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:29.011771 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:30:29.011939 kernel: audit: type=1334 audit(1719333029.003:923): prog-id=141 op=UNLOAD Jun 25 16:30:29.003000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:30:29.004042 systemd[1]: cri-containerd-cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d.scope: Deactivated successfully. Jun 25 16:30:29.004446 systemd[1]: cri-containerd-cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d.scope: Consumed 5.746s CPU time. Jun 25 16:30:29.010000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:30:29.016682 kernel: audit: type=1334 audit(1719333029.010:924): prog-id=144 op=UNLOAD Jun 25 16:30:29.032438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d-rootfs.mount: Deactivated successfully. Jun 25 16:30:29.034473 containerd[1477]: time="2024-06-25T16:30:29.034418268Z" level=info msg="shim disconnected" id=cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d namespace=k8s.io Jun 25 16:30:29.034905 containerd[1477]: time="2024-06-25T16:30:29.034767773Z" level=warning msg="cleaning up after shim disconnected" id=cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d namespace=k8s.io Jun 25 16:30:29.034905 containerd[1477]: time="2024-06-25T16:30:29.034793173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:29.282069 kubelet[2831]: I0625 16:30:29.281426 2831 scope.go:117] "RemoveContainer" containerID="cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d" Jun 25 16:30:29.284274 containerd[1477]: time="2024-06-25T16:30:29.284235339Z" level=info msg="CreateContainer within sandbox \"2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 16:30:29.319682 containerd[1477]: time="2024-06-25T16:30:29.319633217Z" level=info msg="CreateContainer within sandbox \"2ad328ca51b1d07c1911f7a5bbeae8dbd5a2fb2c27f59c93f51442df59f8edc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16\"" Jun 25 16:30:29.320161 containerd[1477]: time="2024-06-25T16:30:29.320127924Z" level=info msg="StartContainer for \"6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16\"" Jun 25 16:30:29.356751 systemd[1]: Started cri-containerd-6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16.scope - libcontainer container 6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16. 
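The AVC records above are SELinux denials (permissive=0): kube-controller-manager and kube-apiserver, confined as container_t, are refused the "watch" permission on the certificate files under /etc/kubernetes/pki, which carry the host label etc_t on the overlay mount. The paired SYSCALL records spell out the mechanics: arch=c000003e is x86_64, syscall=254 is inotify_add_watch on that architecture, and exit=-13 is the call failing with -EACCES. A tiny helper, leaning on Python's errno table, for turning the signed exit value back into an errno name:

import errno

def audit_exit_to_errno(exit_value: int) -> str:
    """Audit SYSCALL records report failed calls as negative errno values;
    translate them back to the symbolic name (e.g. -13 -> EACCES)."""
    if exit_value >= 0:
        return f"success (returned {exit_value})"
    return errno.errorcode.get(-exit_value, f"errno {-exit_value}")

print(audit_exit_to_errno(-13))  # EACCES
print(audit_exit_to_errno(3))    # success (returned 3)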
Jun 25 16:30:29.367000 audit: BPF prog-id=235 op=LOAD Jun 25 16:30:29.368000 audit: BPF prog-id=236 op=LOAD Jun 25 16:30:29.374051 kernel: audit: type=1334 audit(1719333029.367:925): prog-id=235 op=LOAD Jun 25 16:30:29.374165 kernel: audit: type=1334 audit(1719333029.368:926): prog-id=236 op=LOAD Jun 25 16:30:29.368000 audit[6291]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3114 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:29.384350 kernel: audit: type=1300 audit(1719333029.368:926): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3114 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:29.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237633237356565356637383636643566653533363031343330 Jun 25 16:30:29.395391 kernel: audit: type=1327 audit(1719333029.368:926): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237633237356565356637383636643566653533363031343330 Jun 25 16:30:29.368000 audit: BPF prog-id=237 op=LOAD Jun 25 16:30:29.399291 kernel: audit: type=1334 audit(1719333029.368:927): prog-id=237 op=LOAD Jun 25 16:30:29.368000 audit[6291]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3114 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:29.409803 kernel: audit: type=1300 audit(1719333029.368:927): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3114 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:29.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237633237356565356637383636643566653533363031343330 Jun 25 16:30:29.427576 kernel: audit: type=1327 audit(1719333029.368:927): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237633237356565356637383636643566653533363031343330 Jun 25 16:30:29.427685 kernel: audit: type=1334 audit(1719333029.368:928): prog-id=237 op=UNLOAD Jun 25 16:30:29.368000 audit: BPF prog-id=237 op=UNLOAD Jun 25 16:30:29.368000 audit: BPF prog-id=236 op=UNLOAD Jun 25 16:30:29.368000 audit: BPF prog-id=238 op=LOAD Jun 25 16:30:29.368000 audit[6291]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3114 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:29.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237633237356565356637383636643566653533363031343330 Jun 25 16:30:29.433454 containerd[1477]: time="2024-06-25T16:30:29.433410152Z" level=info msg="StartContainer for \"6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16\" returns successfully" Jun 25 16:30:29.759491 systemd[1]: cri-containerd-28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0.scope: Deactivated successfully. Jun 25 16:30:29.759923 systemd[1]: cri-containerd-28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0.scope: Consumed 4.119s CPU time. Jun 25 16:30:29.762000 audit: BPF prog-id=105 op=UNLOAD Jun 25 16:30:29.762000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:30:29.786826 containerd[1477]: time="2024-06-25T16:30:29.786761521Z" level=info msg="shim disconnected" id=28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0 namespace=k8s.io Jun 25 16:30:29.786826 containerd[1477]: time="2024-06-25T16:30:29.786820822Z" level=warning msg="cleaning up after shim disconnected" id=28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0 namespace=k8s.io Jun 25 16:30:29.787087 containerd[1477]: time="2024-06-25T16:30:29.786832122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:30.033131 systemd[1]: run-containerd-runc-k8s.io-6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16-runc.TXyPne.mount: Deactivated successfully. Jun 25 16:30:30.033269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0-rootfs.mount: Deactivated successfully. Jun 25 16:30:30.286087 kubelet[2831]: I0625 16:30:30.285956 2831 scope.go:117] "RemoveContainer" containerID="28e2fba388f5ebf8d2d479f56d2884e2dff8bad69a9a991bb7a91294ecd2a3f0" Jun 25 16:30:30.289305 containerd[1477]: time="2024-06-25T16:30:30.289251946Z" level=info msg="CreateContainer within sandbox \"a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 16:30:30.330906 containerd[1477]: time="2024-06-25T16:30:30.330852799Z" level=info msg="CreateContainer within sandbox \"a6dd08a0a5c1ab090951fd77c84acda6def478387f43dc6bee2e2708fe91ac3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445\"" Jun 25 16:30:30.331463 containerd[1477]: time="2024-06-25T16:30:30.331427807Z" level=info msg="StartContainer for \"4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445\"" Jun 25 16:30:30.370788 systemd[1]: Started cri-containerd-4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445.scope - libcontainer container 4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445. 
Jun 25 16:30:30.384000 audit: BPF prog-id=239 op=LOAD Jun 25 16:30:30.384000 audit: BPF prog-id=240 op=LOAD Jun 25 16:30:30.384000 audit[6354]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2531 pid=6354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343635633538396239303431656634633031353462383432613033 Jun 25 16:30:30.385000 audit: BPF prog-id=241 op=LOAD Jun 25 16:30:30.385000 audit[6354]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2531 pid=6354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343635633538396239303431656634633031353462383432613033 Jun 25 16:30:30.385000 audit: BPF prog-id=241 op=UNLOAD Jun 25 16:30:30.385000 audit: BPF prog-id=240 op=UNLOAD Jun 25 16:30:30.385000 audit: BPF prog-id=242 op=LOAD Jun 25 16:30:30.385000 audit[6354]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2531 pid=6354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343635633538396239303431656634633031353462383432613033 Jun 25 16:30:30.423010 containerd[1477]: time="2024-06-25T16:30:30.422955924Z" level=info msg="StartContainer for \"4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445\" returns successfully" Jun 25 16:30:31.035256 systemd[1]: run-containerd-runc-k8s.io-4a465c589b9041ef4c0154b842a03f4458a7ac4165f68489f1a94ab4f8654445-runc.09kOnK.mount: Deactivated successfully. Jun 25 16:30:31.108551 systemd[1]: run-containerd-runc-k8s.io-a848a39e97b7af2625774db6241610c1720f68c6fb598d85d7d643ed3da65483-runc.j7Mll1.mount: Deactivated successfully. Jun 25 16:30:31.113957 kubelet[2831]: E0625 16:30:31.113722 2831 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:46434->10.200.8.23:2379: read: connection timed out" Jun 25 16:30:31.132458 systemd[1]: cri-containerd-d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea.scope: Deactivated successfully. Jun 25 16:30:31.132824 systemd[1]: cri-containerd-d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea.scope: Consumed 2.072s CPU time. 
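Each time systemd tears down one of these cri-containerd scopes it also reports the CPU time the cgroup consumed: 5.746 s for the tigera-operator container, 4.119 s for kube-controller-manager, and 2.072 s for kube-scheduler in the deactivations above. A rough sketch (regex and function name are illustrative) that collects those figures keyed by container ID, as the lines appear in this journal:

import re

# Matches teardown lines of the form:
#   systemd[1]: cri-containerd-<64-hex-id>.scope: Consumed 5.746s CPU time.
SCOPE_CPU = re.compile(
    r"cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: Consumed "
    r"(?P<secs>[0-9.]+)s CPU time"
)

def cpu_time_by_container(lines):
    """Return {container_id: seconds of CPU time} for each scope teardown."""
    usage = {}
    for line in lines:
        m = SCOPE_CPU.search(line)
        if m:
            usage[m.group("cid")] = float(m.group("secs"))
    return usage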
Jun 25 16:30:31.136000 audit: BPF prog-id=109 op=UNLOAD Jun 25 16:30:31.136000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:30:31.185539 containerd[1477]: time="2024-06-25T16:30:31.185470529Z" level=info msg="shim disconnected" id=d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea namespace=k8s.io Jun 25 16:30:31.185718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea-rootfs.mount: Deactivated successfully. Jun 25 16:30:31.185986 containerd[1477]: time="2024-06-25T16:30:31.185960736Z" level=warning msg="cleaning up after shim disconnected" id=d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea namespace=k8s.io Jun 25 16:30:31.186115 containerd[1477]: time="2024-06-25T16:30:31.186098338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:31.298239 kubelet[2831]: I0625 16:30:31.297023 2831 scope.go:117] "RemoveContainer" containerID="d83cd760dd23a8ddac77c2f0d15ff25baab97cd9449f68b77a473a2983603eea" Jun 25 16:30:31.300877 containerd[1477]: time="2024-06-25T16:30:31.300838741Z" level=info msg="CreateContainer within sandbox \"78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 16:30:31.341273 containerd[1477]: time="2024-06-25T16:30:31.341214271Z" level=info msg="CreateContainer within sandbox \"78d3b90bae535a10914cf4804d70fd006a635cb39b45e0209be323d4b0b63921\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3aae3607f3c2891ce9c730daef3b400419c525525771164bfdf0d5562c058b57\"" Jun 25 16:30:31.341841 containerd[1477]: time="2024-06-25T16:30:31.341803378Z" level=info msg="StartContainer for \"3aae3607f3c2891ce9c730daef3b400419c525525771164bfdf0d5562c058b57\"" Jun 25 16:30:31.377787 systemd[1]: Started cri-containerd-3aae3607f3c2891ce9c730daef3b400419c525525771164bfdf0d5562c058b57.scope - libcontainer container 3aae3607f3c2891ce9c730daef3b400419c525525771164bfdf0d5562c058b57. 
Jun 25 16:30:31.405000 audit: BPF prog-id=243 op=LOAD Jun 25 16:30:31.406000 audit: BPF prog-id=244 op=LOAD Jun 25 16:30:31.406000 audit[6440]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2534 pid=6440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:31.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361616533363037663363323839316365396337333064616566336234 Jun 25 16:30:31.406000 audit: BPF prog-id=245 op=LOAD Jun 25 16:30:31.406000 audit[6440]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2534 pid=6440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:31.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361616533363037663363323839316365396337333064616566336234 Jun 25 16:30:31.406000 audit: BPF prog-id=245 op=UNLOAD Jun 25 16:30:31.407000 audit: BPF prog-id=244 op=UNLOAD Jun 25 16:30:31.407000 audit: BPF prog-id=246 op=LOAD Jun 25 16:30:31.407000 audit[6440]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2534 pid=6440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:31.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361616533363037663363323839316365396337333064616566336234 Jun 25 16:30:31.447267 containerd[1477]: time="2024-06-25T16:30:31.447208860Z" level=info msg="StartContainer for \"3aae3607f3c2891ce9c730daef3b400419c525525771164bfdf0d5562c058b57\" returns successfully" Jun 25 16:30:31.706000 audit[6367]: AVC avc: denied { watch } for pid=6367 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:31.706000 audit[6367]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0006d60c0 a2=fc6 a3=0 items=0 ppid=2531 pid=6367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:31.706000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:31.707000 audit[6367]: AVC avc: denied { watch } for pid=6367 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 
scontext=system_u:system_r:container_t:s0:c551,c614 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:31.707000 audit[6367]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000172240 a2=fc6 a3=0 items=0 ppid=2531 pid=6367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c551,c614 key=(null) Jun 25 16:30:31.707000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:33.671889 kubelet[2831]: E0625 16:30:33.671762 2831 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3815.2.4-a-a46e2cd05c.17dc4c467343ec51", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3815.2.4-a-a46e2cd05c", UID:"a617164dee56ad05f4096a408f719e57", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-a46e2cd05c"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 30, 23, 213939793, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 30, 23, 213939793, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-a46e2cd05c"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:46252->10.200.8.23:2379: read: connection timed out' (will not retry!) Jun 25 16:30:39.751885 kubelet[2831]: I0625 16:30:39.751832 2831 status_manager.go:853] "Failed to get status for pod" podUID="11fcd378-70a1-49b9-ae12-a00650cba1f5" pod="tigera-operator/tigera-operator-76c4974c85-lkzzs" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:46378->10.200.8.23:2379: read: connection timed out" Jun 25 16:30:40.907609 systemd[1]: cri-containerd-6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16.scope: Deactivated successfully. Jun 25 16:30:40.906000 audit: BPF prog-id=235 op=UNLOAD Jun 25 16:30:40.911671 kernel: kauditd_printk_skb: 38 callbacks suppressed Jun 25 16:30:40.911774 kernel: audit: type=1334 audit(1719333040.906:949): prog-id=235 op=UNLOAD Jun 25 16:30:40.915000 audit: BPF prog-id=238 op=UNLOAD Jun 25 16:30:40.920620 kernel: audit: type=1334 audit(1719333040.915:950): prog-id=238 op=UNLOAD Jun 25 16:30:40.935484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16-rootfs.mount: Deactivated successfully. 
Jun 25 16:30:40.974852 containerd[1477]: time="2024-06-25T16:30:40.974783881Z" level=info msg="shim disconnected" id=6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16 namespace=k8s.io Jun 25 16:30:40.974852 containerd[1477]: time="2024-06-25T16:30:40.974848382Z" level=warning msg="cleaning up after shim disconnected" id=6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16 namespace=k8s.io Jun 25 16:30:40.974852 containerd[1477]: time="2024-06-25T16:30:40.974859082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:40.987239 containerd[1477]: time="2024-06-25T16:30:40.987191124Z" level=warning msg="cleanup warnings time=\"2024-06-25T16:30:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 16:30:41.115277 kubelet[2831]: E0625 16:30:41.114979 2831 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-a46e2cd05c?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 16:30:41.324989 kubelet[2831]: I0625 16:30:41.323790 2831 scope.go:117] "RemoveContainer" containerID="cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d" Jun 25 16:30:41.324989 kubelet[2831]: I0625 16:30:41.324260 2831 scope.go:117] "RemoveContainer" containerID="6efb7c275ee5f7866d5fe53601430b408684bc91256a59295fbdf3348fa11a16" Jun 25 16:30:41.324989 kubelet[2831]: E0625 16:30:41.324715 2831 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-lkzzs_tigera-operator(11fcd378-70a1-49b9-ae12-a00650cba1f5)\"" pod="tigera-operator/tigera-operator-76c4974c85-lkzzs" podUID="11fcd378-70a1-49b9-ae12-a00650cba1f5" Jun 25 16:30:41.326201 containerd[1477]: time="2024-06-25T16:30:41.326139969Z" level=info msg="RemoveContainer for \"cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d\"" Jun 25 16:30:41.332256 containerd[1477]: time="2024-06-25T16:30:41.332217538Z" level=info msg="RemoveContainer for \"cbd5e3950df3313a5490181ad6c4d825c6e541e07fe6d828992f72d81139303d\" returns successfully" Jun 25 16:30:44.336631 systemd[1]: run-containerd-runc-k8s.io-a63f74c933c9fd350e6597c962ecb70c6b075343ad310a7c35d858cf66732c0e-runc.byu5RQ.mount: Deactivated successfully.
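Taken together, the tail of the log shows the same crash-and-replace cycle several times over: containerd reports "shim disconnected" for a dead container, the kubelet logs "RemoveContainer" for the old ID and asks containerd to start a replacement with Attempt incremented, and when the tigera-operator replacement dies again the kubelet parks the pod in CrashLoopBackOff with a 10 s back-off. A loose sketch, using message fragments taken from the lines above and an illustrative grouping scheme, that clusters the containerd and kubelet messages by 64-character container ID so one crash can be read as a single timeline:

import re
from collections import defaultdict

CID = re.compile(r"[0-9a-f]{64}")

# Message fragments, as they appear in this journal, that mark the stages of
# one crash/replace cycle.
STAGES = [
    ("shim disconnected", "containerd: shim disconnected"),
    ('"RemoveContainer"', "kubelet: RemoveContainer"),
    ("StartContainer for", "containerd: StartContainer"),
]

def crash_timeline(lines):
    """Map container ID -> stages observed for it, in the order logged."""
    timeline = defaultdict(list)
    for line in lines:
        cid = CID.search(line)
        if not cid:
            continue
        for needle, label in STAGES:
            if needle in line:
                timeline[cid.group(0)].append(label)
    return timeline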