Jun 25 16:31:20.303524 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:31:20.303547 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:20.303560 kernel: BIOS-provided physical RAM map: Jun 25 16:31:20.303568 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 16:31:20.303575 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jun 25 16:31:20.303582 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jun 25 16:31:20.303592 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jun 25 16:31:20.303599 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jun 25 16:31:20.303607 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jun 25 16:31:20.303614 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jun 25 16:31:20.303624 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jun 25 16:31:20.303631 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jun 25 16:31:20.303639 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jun 25 16:31:20.303646 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jun 25 16:31:20.303656 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jun 25 16:31:20.303666 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jun 25 16:31:20.303674 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jun 25 16:31:20.303682 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jun 25 16:31:20.303690 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jun 25 16:31:20.303698 kernel: NX (Execute Disable) protection: active Jun 25 16:31:20.303706 kernel: efi: EFI v2.70 by EDK II Jun 25 16:31:20.303714 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 Jun 25 16:31:20.303722 kernel: SMBIOS 2.8 present. 
Jun 25 16:31:20.303730 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jun 25 16:31:20.303738 kernel: Hypervisor detected: KVM Jun 25 16:31:20.303745 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:31:20.303753 kernel: kvm-clock: using sched offset of 5858501408 cycles Jun 25 16:31:20.303764 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:31:20.303781 kernel: tsc: Detected 2794.750 MHz processor Jun 25 16:31:20.303790 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:31:20.303798 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:31:20.303807 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jun 25 16:31:20.303815 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:31:20.303823 kernel: Using GB pages for direct mapping Jun 25 16:31:20.303832 kernel: Secure boot disabled Jun 25 16:31:20.303842 kernel: ACPI: Early table checksum verification disabled Jun 25 16:31:20.303850 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jun 25 16:31:20.303859 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jun 25 16:31:20.303867 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:31:20.303876 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:31:20.303888 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jun 25 16:31:20.303897 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:31:20.303907 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:31:20.303916 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:31:20.303925 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jun 25 16:31:20.303934 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jun 25 16:31:20.303943 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jun 25 16:31:20.303952 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jun 25 16:31:20.303961 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jun 25 16:31:20.303971 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jun 25 16:31:20.303993 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jun 25 16:31:20.304002 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jun 25 16:31:20.304011 kernel: No NUMA configuration found Jun 25 16:31:20.304020 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jun 25 16:31:20.304029 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jun 25 16:31:20.304038 kernel: Zone ranges: Jun 25 16:31:20.304047 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:31:20.304056 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jun 25 16:31:20.304064 kernel: Normal empty Jun 25 16:31:20.304075 kernel: Movable zone start for each node Jun 25 16:31:20.304084 kernel: Early memory node ranges Jun 25 16:31:20.304093 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 16:31:20.304102 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jun 25 16:31:20.304111 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jun 25 16:31:20.304120 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jun 25 
16:31:20.304128 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jun 25 16:31:20.304137 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jun 25 16:31:20.304146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jun 25 16:31:20.304157 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:31:20.304166 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 16:31:20.304175 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jun 25 16:31:20.304184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:31:20.304192 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jun 25 16:31:20.304201 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jun 25 16:31:20.304210 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jun 25 16:31:20.304219 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 16:31:20.304228 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:31:20.304238 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:31:20.304248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:31:20.304256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:31:20.304265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:31:20.304274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:31:20.304283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:31:20.304291 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:31:20.304300 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:31:20.304309 kernel: TSC deadline timer available Jun 25 16:31:20.304318 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 16:31:20.304329 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 16:31:20.304338 kernel: kvm-guest: setup PV sched yield Jun 25 16:31:20.304346 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jun 25 16:31:20.304355 kernel: Booting paravirtualized kernel on KVM Jun 25 16:31:20.304364 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:31:20.304374 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 16:31:20.304383 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jun 25 16:31:20.304392 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jun 25 16:31:20.304400 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 16:31:20.304411 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:31:20.304420 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:31:20.304428 kernel: Fallback order for Node 0: 0 Jun 25 16:31:20.304437 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jun 25 16:31:20.304446 kernel: Policy zone: DMA32 Jun 25 16:31:20.304457 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:20.304466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 16:31:20.304475 kernel: random: crng init done Jun 25 16:31:20.304486 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:31:20.304495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:31:20.304504 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:31:20.304515 kernel: Memory: 2392584K/2567000K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 174156K reserved, 0K cma-reserved) Jun 25 16:31:20.304526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 16:31:20.304537 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:31:20.304548 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:31:20.304559 kernel: Dynamic Preempt: voluntary Jun 25 16:31:20.304569 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:31:20.304584 kernel: rcu: RCU event tracing is enabled. Jun 25 16:31:20.304595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 16:31:20.304606 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:31:20.304618 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:31:20.304629 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:31:20.304649 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:31:20.304661 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 16:31:20.304670 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 16:31:20.304680 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:31:20.304689 kernel: Console: colour dummy device 80x25 Jun 25 16:31:20.304698 kernel: printk: console [ttyS0] enabled Jun 25 16:31:20.304706 kernel: ACPI: Core revision 20220331 Jun 25 16:31:20.304718 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 16:31:20.304727 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:31:20.304736 kernel: x2apic enabled Jun 25 16:31:20.304746 kernel: Switched APIC routing to physical x2apic. Jun 25 16:31:20.304755 kernel: kvm-guest: setup PV IPIs Jun 25 16:31:20.304774 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:31:20.304784 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 16:31:20.304793 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 16:31:20.304802 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 16:31:20.304812 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 16:31:20.304821 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 16:31:20.304830 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:31:20.304839 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:31:20.304848 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:31:20.304859 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:31:20.304868 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 16:31:20.304878 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 16:31:20.304887 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:31:20.304896 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:31:20.304905 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:31:20.304915 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:31:20.304924 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:31:20.304933 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:31:20.304944 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 16:31:20.304953 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:31:20.304962 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:31:20.304971 kernel: LSM: Security Framework initializing Jun 25 16:31:20.304992 kernel: SELinux: Initializing. Jun 25 16:31:20.305002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:31:20.305011 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:31:20.305020 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 16:31:20.305032 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:20.305041 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:31:20.305051 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:20.305060 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:31:20.305069 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:20.305078 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:31:20.305087 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 16:31:20.305095 kernel: ... version: 0 Jun 25 16:31:20.305105 kernel: ... bit width: 48 Jun 25 16:31:20.305114 kernel: ... generic registers: 6 Jun 25 16:31:20.305125 kernel: ... value mask: 0000ffffffffffff Jun 25 16:31:20.305134 kernel: ... max period: 00007fffffffffff Jun 25 16:31:20.305143 kernel: ... fixed-purpose events: 0 Jun 25 16:31:20.305152 kernel: ... event mask: 000000000000003f Jun 25 16:31:20.305161 kernel: signal: max sigframe size: 1776 Jun 25 16:31:20.305170 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:31:20.305180 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:31:20.305189 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:31:20.305198 kernel: x86: Booting SMP configuration: Jun 25 16:31:20.305207 kernel: .... 
node #0, CPUs: #1 #2 #3 Jun 25 16:31:20.305218 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 16:31:20.305227 kernel: smpboot: Max logical packages: 1 Jun 25 16:31:20.305236 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 16:31:20.305245 kernel: devtmpfs: initialized Jun 25 16:31:20.305254 kernel: x86/mm: Memory block size: 128MB Jun 25 16:31:20.305264 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jun 25 16:31:20.305273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jun 25 16:31:20.305282 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jun 25 16:31:20.305291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jun 25 16:31:20.305303 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jun 25 16:31:20.305312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:31:20.305321 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 16:31:20.305330 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:31:20.305340 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:31:20.305349 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:31:20.305358 kernel: audit: type=2000 audit(1719333078.986:1): state=initialized audit_enabled=0 res=1 Jun 25 16:31:20.305367 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:31:20.305376 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:31:20.305387 kernel: cpuidle: using governor menu Jun 25 16:31:20.305396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:31:20.305405 kernel: dca service started, version 1.12.1 Jun 25 16:31:20.305414 kernel: PCI: Using configuration type 1 for base access Jun 25 16:31:20.305424 kernel: PCI: Using configuration type 1 for extended access Jun 25 16:31:20.305433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 16:31:20.305442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:31:20.305451 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:31:20.305460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:31:20.305471 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:31:20.305480 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:31:20.305489 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:31:20.305498 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:31:20.305508 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:31:20.305517 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:31:20.305526 kernel: ACPI: Interpreter enabled Jun 25 16:31:20.305536 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 16:31:20.305545 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:31:20.305556 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:31:20.305565 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:31:20.305574 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:31:20.305583 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:31:20.305742 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:31:20.305759 kernel: acpiphp: Slot [3] registered Jun 25 16:31:20.305777 kernel: acpiphp: Slot [4] registered Jun 25 16:31:20.305786 kernel: acpiphp: Slot [5] registered Jun 25 16:31:20.305798 kernel: acpiphp: Slot [6] registered Jun 25 16:31:20.305807 kernel: acpiphp: Slot [7] registered Jun 25 16:31:20.305817 kernel: acpiphp: Slot [8] registered Jun 25 16:31:20.305826 kernel: acpiphp: Slot [9] registered Jun 25 16:31:20.305835 kernel: acpiphp: Slot [10] registered Jun 25 16:31:20.305843 kernel: acpiphp: Slot [11] registered Jun 25 16:31:20.305853 kernel: acpiphp: Slot [12] registered Jun 25 16:31:20.305862 kernel: acpiphp: Slot [13] registered Jun 25 16:31:20.305871 kernel: acpiphp: Slot [14] registered Jun 25 16:31:20.305882 kernel: acpiphp: Slot [15] registered Jun 25 16:31:20.305891 kernel: acpiphp: Slot [16] registered Jun 25 16:31:20.305900 kernel: acpiphp: Slot [17] registered Jun 25 16:31:20.305909 kernel: acpiphp: Slot [18] registered Jun 25 16:31:20.305918 kernel: acpiphp: Slot [19] registered Jun 25 16:31:20.305927 kernel: acpiphp: Slot [20] registered Jun 25 16:31:20.305936 kernel: acpiphp: Slot [21] registered Jun 25 16:31:20.305945 kernel: acpiphp: Slot [22] registered Jun 25 16:31:20.305954 kernel: acpiphp: Slot [23] registered Jun 25 16:31:20.305963 kernel: acpiphp: Slot [24] registered Jun 25 16:31:20.305974 kernel: acpiphp: Slot [25] registered Jun 25 16:31:20.305996 kernel: acpiphp: Slot [26] registered Jun 25 16:31:20.306005 kernel: acpiphp: Slot [27] registered Jun 25 16:31:20.306014 kernel: acpiphp: Slot [28] registered Jun 25 16:31:20.306023 kernel: acpiphp: Slot [29] registered Jun 25 16:31:20.306032 kernel: acpiphp: Slot [30] registered Jun 25 16:31:20.306040 kernel: acpiphp: Slot [31] registered Jun 25 16:31:20.306049 kernel: PCI host bridge to bus 0000:00 Jun 25 16:31:20.306165 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:31:20.306261 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:31:20.306349 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:31:20.306436 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 16:31:20.306522 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jun 25 16:31:20.306608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:31:20.306729 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:31:20.306856 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:31:20.306966 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:31:20.307083 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 16:31:20.307183 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:31:20.307281 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:31:20.307380 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:31:20.307477 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:31:20.307591 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:31:20.307691 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 16:31:20.307803 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 16:31:20.307911 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 16:31:20.308027 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jun 25 16:31:20.308125 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jun 25 16:31:20.308228 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jun 25 16:31:20.308326 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jun 25 16:31:20.308425 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:31:20.308533 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 16:31:20.308635 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 16:31:20.308738 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jun 25 16:31:20.308850 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jun 25 16:31:20.308972 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:31:20.309089 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:31:20.309189 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jun 25 16:31:20.309288 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jun 25 16:31:20.309396 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:31:20.309497 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 16:31:20.309597 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jun 25 16:31:20.309701 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jun 25 16:31:20.309811 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jun 25 16:31:20.309826 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:31:20.309835 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:31:20.309844 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:31:20.309854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:31:20.309863 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:31:20.309872 kernel: iommu: Default domain type: Translated Jun 25 16:31:20.309882 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:31:20.309894 kernel: pps_core: LinuxPPS API ver. 
1 registered Jun 25 16:31:20.309904 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:31:20.309914 kernel: PTP clock support registered Jun 25 16:31:20.309922 kernel: Registered efivars operations Jun 25 16:31:20.309932 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:31:20.309941 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:31:20.309950 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jun 25 16:31:20.309960 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jun 25 16:31:20.309969 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jun 25 16:31:20.309992 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jun 25 16:31:20.310093 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:31:20.310190 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:31:20.310288 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:31:20.310301 kernel: vgaarb: loaded Jun 25 16:31:20.310311 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 16:31:20.310320 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 16:31:20.310330 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:31:20.310342 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:31:20.310352 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:31:20.310362 kernel: pnp: PnP ACPI init Jun 25 16:31:20.310464 kernel: pnp 00:02: [dma 2] Jun 25 16:31:20.310478 kernel: pnp: PnP ACPI: found 6 devices Jun 25 16:31:20.310488 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:31:20.310498 kernel: NET: Registered PF_INET protocol family Jun 25 16:31:20.310507 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:31:20.310516 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 16:31:20.310529 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:31:20.310538 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:31:20.310548 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 16:31:20.310557 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 16:31:20.310567 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:31:20.310576 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:31:20.310585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:31:20.310594 kernel: NET: Registered PF_XDP protocol family Jun 25 16:31:20.310696 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jun 25 16:31:20.310805 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jun 25 16:31:20.310896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:31:20.310998 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:31:20.311087 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:31:20.311175 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 16:31:20.311262 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jun 25 16:31:20.311362 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:31:20.311466 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI 
transfers Jun 25 16:31:20.311480 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:31:20.311490 kernel: Initialise system trusted keyrings Jun 25 16:31:20.311499 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 16:31:20.311508 kernel: Key type asymmetric registered Jun 25 16:31:20.311517 kernel: Asymmetric key parser 'x509' registered Jun 25 16:31:20.311526 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:31:20.311536 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:31:20.311545 kernel: io scheduler mq-deadline registered Jun 25 16:31:20.311556 kernel: io scheduler kyber registered Jun 25 16:31:20.311566 kernel: io scheduler bfq registered Jun 25 16:31:20.311575 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:31:20.311585 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:31:20.311594 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 16:31:20.311603 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:31:20.311613 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:31:20.311622 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:31:20.311632 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:31:20.311643 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:31:20.311652 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:31:20.311674 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:31:20.311788 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 16:31:20.311882 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 16:31:20.311971 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T16:31:19 UTC (1719333079) Jun 25 16:31:20.312077 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 16:31:20.312094 kernel: efifb: probing for efifb Jun 25 16:31:20.312105 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jun 25 16:31:20.312114 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jun 25 16:31:20.312124 kernel: efifb: scrolling: redraw Jun 25 16:31:20.312133 kernel: hpet: Lost 2 RTC interrupts Jun 25 16:31:20.312144 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jun 25 16:31:20.312153 kernel: Console: switching to colour frame buffer device 100x37 Jun 25 16:31:20.312163 kernel: fb0: EFI VGA frame buffer device Jun 25 16:31:20.312172 kernel: pstore: Registered efi as persistent store backend Jun 25 16:31:20.312184 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:31:20.312194 kernel: Segment Routing with IPv6 Jun 25 16:31:20.312203 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:31:20.312213 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:31:20.312222 kernel: Key type dns_resolver registered Jun 25 16:31:20.312232 kernel: IPI shorthand broadcast: enabled Jun 25 16:31:20.312241 kernel: sched_clock: Marking stable (959635885, 176463765)->(1237386959, -101287309) Jun 25 16:31:20.312251 kernel: registered taskstats version 1 Jun 25 16:31:20.312263 kernel: Loading compiled-in X.509 certificates Jun 25 16:31:20.312273 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:31:20.312283 kernel: Key type .fscrypt registered Jun 25 16:31:20.312292 kernel: Key type fscrypt-provisioning registered Jun 25 16:31:20.312304 kernel: pstore: Using crash dump compression: deflate Jun 25 
16:31:20.312314 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:31:20.312324 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:31:20.312335 kernel: ima: No architecture policies found Jun 25 16:31:20.312344 kernel: clk: Disabling unused clocks Jun 25 16:31:20.312354 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:31:20.312364 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:31:20.312374 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:31:20.312383 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:31:20.312393 kernel: Run /init as init process Jun 25 16:31:20.312402 kernel: with arguments: Jun 25 16:31:20.312412 kernel: /init Jun 25 16:31:20.312423 kernel: with environment: Jun 25 16:31:20.312432 kernel: HOME=/ Jun 25 16:31:20.312442 kernel: TERM=linux Jun 25 16:31:20.312451 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:31:20.312464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:31:20.312476 systemd[1]: Detected virtualization kvm. Jun 25 16:31:20.312487 systemd[1]: Detected architecture x86-64. Jun 25 16:31:20.312500 systemd[1]: Running in initrd. Jun 25 16:31:20.312510 systemd[1]: No hostname configured, using default hostname. Jun 25 16:31:20.312520 systemd[1]: Hostname set to <localhost>. Jun 25 16:31:20.312531 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:31:20.312541 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:31:20.312552 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:20.312562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:20.312573 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:31:20.312585 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:31:20.312595 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:31:20.312605 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:31:20.312616 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:31:20.312626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:31:20.312637 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:31:20.312647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:31:20.312660 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:31:20.312670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:20.312681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:20.312691 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:20.312702 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:31:20.312712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:31:20.312723 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:31:20.312733 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:31:20.312744 systemd[1]: Starting systemd-journald.service - Journal Service. 
Jun 25 16:31:20.312756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:31:20.312776 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:31:20.312786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:20.312800 systemd-journald[194]: Journal started Jun 25 16:31:20.312849 systemd-journald[194]: Runtime Journal (/run/log/journal/60e511aa24ed4201925db8201f6b3d55) is 6.0M, max 48.3M, 42.3M free. Jun 25 16:31:20.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.325007 kernel: audit: type=1130 audit(1719333080.321:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.325027 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:31:20.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.326314 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:31:20.330068 kernel: audit: type=1130 audit(1719333080.325:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.328388 systemd-modules-load[195]: Inserted module 'overlay' Jun 25 16:31:20.336428 kernel: audit: type=1130 audit(1719333080.327:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.336441 kernel: audit: type=1130 audit(1719333080.329:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.328867 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:31:20.344139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:31:20.377998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:31:20.380232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:31:20.386004 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:31:20.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:20.387511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:20.431451 kernel: audit: type=1130 audit(1719333080.387:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.431469 kernel: Bridge firewalling registered Jun 25 16:31:20.388940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:20.430974 systemd-modules-load[195]: Inserted module 'br_netfilter' Jun 25 16:31:20.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.434430 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:31:20.441929 kernel: audit: type=1130 audit(1719333080.433:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.441948 kernel: audit: type=1130 audit(1719333080.437:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.447143 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:31:20.486753 dracut-cmdline[213]: dracut-dracut-053 Jun 25 16:31:20.486753 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jun 25 16:31:20.486753 dracut-cmdline[213]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:20.501290 kernel: SCSI subsystem initialized Jun 25 16:31:20.501316 kernel: audit: type=1334 audit(1719333080.487:9): prog-id=6 op=LOAD Jun 25 16:31:20.487000 audit: BPF prog-id=6 op=LOAD Jun 25 16:31:20.500656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:31:20.529290 systemd-resolved[278]: Positive Trust Anchors: Jun 25 16:31:20.544602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:31:20.544626 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:31:20.544637 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:31:20.529460 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:31:20.529489 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:31:20.531664 systemd-resolved[278]: Defaulting to hostname 'linux'. Jun 25 16:31:20.532376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:31:20.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.603110 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:20.604442 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:31:20.606461 systemd-modules-load[195]: Inserted module 'dm_multipath' Jun 25 16:31:20.607701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:20.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.658010 kernel: iscsi: registered transport (tcp) Jun 25 16:31:20.659124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:31:20.700914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:20.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.739021 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:31:20.739056 kernel: QLogic iSCSI HBA Driver Jun 25 16:31:20.777352 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:31:20.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:20.788226 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:31:20.884023 kernel: raid6: avx2x4 gen() 24250 MB/s Jun 25 16:31:20.916030 kernel: raid6: avx2x2 gen() 25001 MB/s Jun 25 16:31:20.946289 kernel: raid6: avx2x1 gen() 20833 MB/s Jun 25 16:31:20.946321 kernel: raid6: using algorithm avx2x2 gen() 25001 MB/s Jun 25 16:31:20.973016 kernel: raid6: .... xor() 16387 MB/s, rmw enabled Jun 25 16:31:20.973059 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:31:20.977014 kernel: xor: automatically using best checksumming function avx Jun 25 16:31:21.139031 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:31:21.147884 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:31:21.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:21.178000 audit: BPF prog-id=7 op=LOAD Jun 25 16:31:21.178000 audit: BPF prog-id=8 op=LOAD Jun 25 16:31:21.196202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:21.225098 systemd-udevd[397]: Using default interface naming scheme 'v252'. Jun 25 16:31:21.231641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:31:21.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:21.266498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:31:21.281757 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jun 25 16:31:21.313612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:31:21.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:21.327157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:31:21.360145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:21.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:21.386005 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 16:31:21.397192 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 16:31:21.397293 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:31:21.397305 kernel: GPT:9289727 != 19775487 Jun 25 16:31:21.397320 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:31:21.397329 kernel: GPT:9289727 != 19775487 Jun 25 16:31:21.397339 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:31:21.397348 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:31:21.397358 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:31:21.415044 kernel: libata version 3.00 loaded. Jun 25 16:31:21.417017 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:31:21.429680 kernel: scsi host0: ata_piix Jun 25 16:31:21.429820 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:31:21.429834 kernel: AES CTR mode by8 optimization enabled Jun 25 16:31:21.429842 kernel: scsi host1: ata_piix Jun 25 16:31:21.429924 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 16:31:21.429933 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 16:31:21.429943 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (455) Jun 25 16:31:21.436002 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445) Jun 25 16:31:21.437121 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:31:21.441497 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:31:21.450437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jun 25 16:31:21.450957 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 16:31:21.458105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:31:21.468121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:31:21.614786 kernel: ata2: found unknown device (class 0) Jun 25 16:31:21.614859 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 16:31:21.617082 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 16:31:21.668441 disk-uuid[520]: Primary Header is updated. Jun 25 16:31:21.668441 disk-uuid[520]: Secondary Entries is updated. Jun 25 16:31:21.668441 disk-uuid[520]: Secondary Header is updated. Jun 25 16:31:21.674040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:31:21.679026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:31:21.736234 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 16:31:21.759054 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:31:21.759078 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:31:22.725034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:31:22.725421 disk-uuid[533]: The operation has completed successfully. Jun 25 16:31:22.748656 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:31:22.748755 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:31:22.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:22.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:22.776139 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:31:22.800344 sh[548]: Success Jun 25 16:31:22.849020 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 16:31:22.876195 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:31:22.906557 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:31:22.921135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:31:22.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:22.946045 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:31:22.946097 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:22.946110 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:31:22.947220 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:31:22.948109 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:31:22.975077 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:31:22.975746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:31:23.007189 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 16:31:23.009313 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:31:23.017673 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:23.017741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:23.017755 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:31:23.024233 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:31:23.026405 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:23.076063 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:31:23.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.079000 audit: BPF prog-id=9 op=LOAD Jun 25 16:31:23.090288 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:31:23.115952 systemd-networkd[727]: lo: Link UP Jun 25 16:31:23.115966 systemd-networkd[727]: lo: Gained carrier Jun 25 16:31:23.116374 systemd-networkd[727]: Enumeration completed Jun 25 16:31:23.116569 systemd-networkd[727]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:31:23.116572 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:31:23.122385 systemd-networkd[727]: eth0: Link UP Jun 25 16:31:23.122388 systemd-networkd[727]: eth0: Gained carrier Jun 25 16:31:23.122392 systemd-networkd[727]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:31:23.122497 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:31:23.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.135169 systemd[1]: Reached target network.target - Network. Jun 25 16:31:23.140159 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:31:23.155128 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:31:23.174429 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:31:23.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.177304 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:31:23.180283 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:31:23.180283 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:31:23.180283 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Jun 25 16:31:23.180283 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:31:23.180283 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:31:23.180283 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:31:23.205515 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:31:23.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.238236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:31:23.253041 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:31:23.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.255356 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:31:23.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.257596 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:31:23.259912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:23.280713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:31:23.292163 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:31:23.307124 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:31:23.320896 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:31:23.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:23.351606 ignition[743]: Ignition 2.15.0 Jun 25 16:31:23.351615 ignition[743]: Stage: fetch-offline Jun 25 16:31:23.351648 ignition[743]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:23.351656 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:23.351745 ignition[743]: parsed url from cmdline: "" Jun 25 16:31:23.351748 ignition[743]: no config URL provided Jun 25 16:31:23.351753 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:31:23.351760 ignition[743]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:31:23.351782 ignition[743]: op(1): [started] loading QEMU firmware config module Jun 25 16:31:23.351786 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 16:31:23.387461 ignition[743]: op(1): [finished] loading QEMU firmware config module Jun 25 16:31:23.447536 ignition[743]: parsing config with SHA512: 54fb2fc012670e1a7abe652b337b86758d6194f3abd4b48098d4bd72bf556c35096178c78436818e6f52c48abfd2f357b41518212224747c5c52869a9714282f Jun 25 16:31:23.451293 unknown[743]: fetched base config from "system" Jun 25 16:31:23.451312 unknown[743]: fetched user config from "qemu" Jun 25 16:31:23.457946 ignition[743]: fetch-offline: fetch-offline passed Jun 25 16:31:23.458883 ignition[743]: Ignition finished successfully Jun 25 16:31:23.460041 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:31:23.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.460364 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:31:23.469184 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:31:23.493903 ignition[758]: Ignition 2.15.0 Jun 25 16:31:23.493914 ignition[758]: Stage: kargs Jun 25 16:31:23.494034 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:23.494043 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:23.494895 ignition[758]: kargs: kargs passed Jun 25 16:31:23.494931 ignition[758]: Ignition finished successfully Jun 25 16:31:23.519411 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:31:23.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.530179 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:31:23.553428 ignition[766]: Ignition 2.15.0 Jun 25 16:31:23.553437 ignition[766]: Stage: disks Jun 25 16:31:23.553549 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:23.553558 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:23.554527 ignition[766]: disks: disks passed Jun 25 16:31:23.554562 ignition[766]: Ignition finished successfully Jun 25 16:31:23.569474 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:31:23.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.571909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
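The iscsid warning above spells out how to silence this message: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName line in IQN form. A minimal sketch of such a file, using a made-up reversed domain and identifier rather than anything taken from this system:

    # /etc/iscsi/initiatorname.iscsi
    # Format per the warning above: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
    InitiatorName=iqn.2024-06.com.example.lab:node-149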
Jun 25 16:31:23.572387 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:23.575460 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:31:23.577993 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:31:23.580116 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:31:23.597145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:31:23.613641 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:31:23.707426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:31:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.722186 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:31:23.825005 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:31:23.825058 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:31:23.825967 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:31:23.871088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:31:23.872886 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:31:23.874754 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:31:23.883722 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (782) Jun 25 16:31:23.883742 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:23.883751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:23.883759 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:31:23.874783 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:31:23.874803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:31:23.877835 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:31:23.884512 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:31:23.888348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:31:23.914329 initrd-setup-root[806]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:31:23.917735 initrd-setup-root[813]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:31:23.920697 initrd-setup-root[820]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:31:23.942638 initrd-setup-root[827]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:31:23.994396 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:31:24.013973 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:31:24.014014 kernel: audit: type=1130 audit(1719333083.993:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:23.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:31:24.016113 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:31:24.017184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:31:24.021963 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:31:24.023535 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:24.049057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:31:24.071134 kernel: audit: type=1130 audit(1719333084.066:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:24.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:24.083935 ignition[896]: INFO : Ignition 2.15.0 Jun 25 16:31:24.083935 ignition[896]: INFO : Stage: mount Jun 25 16:31:24.097498 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:24.097498 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:24.097498 ignition[896]: INFO : mount: mount passed Jun 25 16:31:24.097498 ignition[896]: INFO : Ignition finished successfully Jun 25 16:31:24.101891 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:31:24.126442 kernel: audit: type=1130 audit(1719333084.102:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:24.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:24.134126 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:31:24.156537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:31:24.163136 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (905) Jun 25 16:31:24.163190 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:24.163204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:24.177619 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:31:24.180639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
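The fetch-offline stage earlier loaded the qemu_fw_cfg module and then parsed a user config, i.e. Ignition pulled its configuration from QEMU's firmware config interface rather than from a URL. A sketch of how such a config is usually attached to the VM on the QEMU/Flatcar platform (the fw_cfg key and file name follow the commonly documented convention and are assumptions, not values recoverable from this log):

    qemu-system-x86_64 \
        -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign \
        ...remaining machine, disk and network options...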
Jun 25 16:31:24.204733 ignition[923]: INFO : Ignition 2.15.0 Jun 25 16:31:24.204733 ignition[923]: INFO : Stage: files Jun 25 16:31:24.208250 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:24.208250 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:24.208250 ignition[923]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:31:24.208250 ignition[923]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:31:24.208250 ignition[923]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:31:24.231812 ignition[923]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:31:24.231812 ignition[923]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:31:24.231812 ignition[923]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:31:24.231812 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:31:24.231812 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:31:24.209507 unknown[923]: wrote ssh authorized keys file for user: core Jun 25 16:31:24.254590 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:31:24.330244 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:31:24.330244 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:31:24.334421 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:31:24.336335 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:31:24.338400 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:31:24.340349 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:31:24.342385 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:31:24.369099 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:31:24.371476 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:31:24.373514 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:31:24.375602 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:31:24.377594 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:24.380432 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:24.400058 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:24.402436 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:31:24.819143 systemd-networkd[727]: eth0: Gained IPv6LL Jun 25 16:31:24.869156 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:31:25.276172 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:25.276172 ignition[923]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:31:25.298281 ignition[923]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:31:25.347525 kernel: audit: type=1130 audit(1719333085.322:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:25.347618 ignition[923]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:31:25.347618 ignition[923]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:31:25.347618 ignition[923]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:31:25.347618 ignition[923]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:31:25.347618 ignition[923]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:31:25.347618 ignition[923]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:31:25.347618 ignition[923]: INFO : files: files passed Jun 25 16:31:25.347618 ignition[923]: INFO : Ignition finished successfully Jun 25 16:31:25.320999 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:31:25.407065 kernel: audit: type=1130 audit(1719333085.399:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.407092 kernel: audit: type=1131 audit(1719333085.399:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.407117 kernel: audit: type=1130 audit(1719333085.406:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.379228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:31:25.395818 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:31:25.397285 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:31:25.416329 initrd-setup-root-after-ignition[948]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 16:31:25.397357 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:31:25.419444 initrd-setup-root-after-ignition[950]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:25.419444 initrd-setup-root-after-ignition[950]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:25.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:25.405276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:31:25.433727 kernel: audit: type=1130 audit(1719333085.424:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.433766 kernel: audit: type=1131 audit(1719333085.424:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.433848 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:25.407184 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:31:25.411540 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:31:25.422709 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:31:25.422797 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:31:25.424520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:31:25.431052 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:31:25.433715 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:31:25.443300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:31:25.453389 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:31:25.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.459005 kernel: audit: type=1130 audit(1719333085.452:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.464172 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:31:25.472879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:25.473437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:25.473849 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:31:25.479110 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:31:25.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.479277 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:31:25.480683 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:31:25.482849 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:31:25.485062 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:31:25.487499 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
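The files stage above wrote SSH keys for the "core" user, a Helm tarball, several manifests under /home/core, /etc/flatcar/update.conf, the kubernetes sysext symlink, and the prepare-helm.service unit, which was then preset to enabled while coreos-metadata.service was preset to disabled. A rough Butane-style sketch of a config that would produce entries like these (paths and unit names match the log; the variant/version line, key material and omitted unit contents are illustrative assumptions):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...example-key
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true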
Jun 25 16:31:25.489214 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:31:25.491969 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:31:25.494454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:31:25.498173 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:31:25.500786 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:31:25.503157 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:25.503746 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:31:25.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.506431 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:31:25.506568 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:31:25.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.508507 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:25.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.510057 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:31:25.510170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:31:25.512286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:31:25.512399 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:31:25.514348 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:31:25.516529 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:31:25.522143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:25.522516 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:31:25.525846 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:31:25.527588 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:31:25.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.527730 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:31:25.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.529385 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:31:25.529491 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:31:25.547304 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:31:25.548067 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:31:25.550673 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jun 25 16:31:25.552182 iscsid[732]: iscsid shutting down. Jun 25 16:31:25.554160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:31:25.554852 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:25.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.566279 ignition[968]: INFO : Ignition 2.15.0 Jun 25 16:31:25.566279 ignition[968]: INFO : Stage: umount Jun 25 16:31:25.566279 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:25.566279 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:31:25.566279 ignition[968]: INFO : umount: umount passed Jun 25 16:31:25.566279 ignition[968]: INFO : Ignition finished successfully Jun 25 16:31:25.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.564485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:31:25.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.564584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:31:25.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.568221 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:31:25.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.568746 systemd[1]: iscsid.service: Deactivated successfully. 
Jun 25 16:31:25.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.568820 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:31:25.570136 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:31:25.570203 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:31:25.594000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:31:25.572300 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:31:25.572364 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:31:25.573997 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:31:25.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.574037 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:31:25.575861 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:31:25.575907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:31:25.577036 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:31:25.577077 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:31:25.579212 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:31:25.579605 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:31:25.579679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:31:25.580562 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:31:25.580637 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:31:25.580822 systemd[1]: Stopped target network.target - Network. Jun 25 16:31:25.580948 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:31:25.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.580972 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:31:25.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.581370 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:31:25.581541 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:31:25.587965 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:31:25.588075 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:31:25.590330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:31:25.590374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:25.593240 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jun 25 16:31:25.595031 systemd-networkd[727]: eth0: DHCPv6 lease lost Jun 25 16:31:25.627000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:31:25.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.596313 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:31:25.596413 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:31:25.598584 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:31:25.598634 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:25.611107 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:31:25.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.612912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:31:25.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.612953 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:31:25.614999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:31:25.615046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:25.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.616314 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:31:25.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.616346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:25.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.618381 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:25.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.622675 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:31:25.626766 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:31:25.626852 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:31:25.633538 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:31:25.633656 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:31:25.634509 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:31:25.634584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:31:25.636353 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
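parse-ip-for-networkd, stopped above, is the initrd helper whose job (per its unit description) is to turn network settings given on the kernel command line into systemd-networkd units; on this boot nothing was supplied, so eth0 simply used DHCP. For reference, a static address would normally be passed in dracut's ip= syntax, roughly as below (the values only mirror the DHCP lease seen earlier and are not from this machine's command line):

    ip=10.0.0.149::10.0.0.1:255.255.0.0:node-149:eth0:none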
Jun 25 16:31:25.636389 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:25.638280 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:31:25.638305 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:25.638596 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:31:25.638649 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:31:25.642943 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:31:25.642999 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:31:25.645289 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:31:25.645329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:31:25.647233 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:31:25.647269 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:31:25.673358 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:31:25.674021 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:31:25.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.674111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:25.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.677862 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:31:25.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.677902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:25.678389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:31:25.678428 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:31:25.681642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:31:25.691080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:31:25.691163 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:31:25.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:25.691851 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:31:25.702234 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:31:25.709125 systemd[1]: Switching root. Jun 25 16:31:25.727965 systemd-journald[194]: Journal stopped Jun 25 16:31:26.648872 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
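"Switching root" above is the hand-off from the initrd to the real root filesystem: initrd-switch-root.service asks PID 1 to re-execute itself with /sysroot as the new root, which is why journald reports "Journal stopped" and a SIGTERM from PID 1 immediately afterwards. The equivalent manual invocation is roughly the following (a sketch; the unit's actual command line is not shown in this log):

    systemctl --no-block switch-root /sysroot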
Jun 25 16:31:26.648921 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:31:26.648939 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:31:26.648955 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:31:26.648967 kernel: SELinux: policy capability open_perms=1 Jun 25 16:31:26.648979 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:31:26.649012 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:31:26.649025 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:31:26.649037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:31:26.649052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:31:26.649064 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:31:26.649075 systemd[1]: Successfully loaded SELinux policy in 40.980ms. Jun 25 16:31:26.649092 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.731ms. Jun 25 16:31:26.649105 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:31:26.649114 systemd[1]: Detected virtualization kvm. Jun 25 16:31:26.649126 systemd[1]: Detected architecture x86-64. Jun 25 16:31:26.649136 systemd[1]: Detected first boot. Jun 25 16:31:26.649145 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:31:26.649154 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:31:26.649163 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:31:26.649176 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:31:26.649190 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:31:26.649206 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:31:26.649219 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:31:26.649232 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:31:26.649245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:31:26.649255 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:31:26.649265 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:31:26.649275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:31:26.649284 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:31:26.649293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:26.649305 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:31:26.649314 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:31:26.649324 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:31:26.649335 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:31:26.649345 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jun 25 16:31:26.649354 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:31:26.649363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:31:26.649373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:26.649382 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:31:26.649393 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:31:26.649403 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:31:26.649412 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:31:26.649421 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:31:26.649434 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:31:26.649447 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:26.649460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:26.649473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:26.649489 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:31:26.649502 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:31:26.649513 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:31:26.649522 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:31:26.649533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:26.649542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:31:26.649551 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:31:26.649562 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:31:26.649573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:31:26.649583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:31:26.649606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:31:26.649620 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:31:26.649633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:26.649646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:31:26.649663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:26.649673 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:31:26.649682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:31:26.649693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:31:26.649703 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:31:26.649713 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:31:26.649731 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:31:26.649749 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:31:26.649762 systemd[1]: Stopped systemd-journald.service - Journal Service. 
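"Populated /etc with preset unit settings" above is systemd applying preset files on first boot, including the presets the Ignition files stage recorded earlier ("setting preset to enabled for prepare-helm.service", "setting preset to disabled for coreos-metadata.service"). A preset file is just a list of enable/disable directives; a minimal sketch (the file name 20-ignition.preset is an assumption):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service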
Jun 25 16:31:26.649774 systemd[1]: systemd-journald.service: Consumed 1.207s CPU time. Jun 25 16:31:26.649783 kernel: fuse: init (API version 7.37) Jun 25 16:31:26.649795 kernel: loop: module loaded Jun 25 16:31:26.649804 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:31:26.649814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:31:26.649825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:31:26.649835 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:31:26.649844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:31:26.649857 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:31:26.649871 systemd[1]: Stopped verity-setup.service. Jun 25 16:31:26.649884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:26.649903 systemd-journald[1076]: Journal started Jun 25 16:31:26.649946 systemd-journald[1076]: Runtime Journal (/run/log/journal/60e511aa24ed4201925db8201f6b3d55) is 6.0M, max 48.3M, 42.3M free. Jun 25 16:31:25.790000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:31:26.075000 audit: BPF prog-id=10 op=LOAD Jun 25 16:31:26.075000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:31:26.075000 audit: BPF prog-id=11 op=LOAD Jun 25 16:31:26.075000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:31:26.468000 audit: BPF prog-id=12 op=LOAD Jun 25 16:31:26.468000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:31:26.468000 audit: BPF prog-id=13 op=LOAD Jun 25 16:31:26.468000 audit: BPF prog-id=14 op=LOAD Jun 25 16:31:26.468000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:31:26.468000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:31:26.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.478000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:31:26.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:26.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.621000 audit: BPF prog-id=15 op=LOAD Jun 25 16:31:26.621000 audit: BPF prog-id=16 op=LOAD Jun 25 16:31:26.621000 audit: BPF prog-id=17 op=LOAD Jun 25 16:31:26.621000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:31:26.621000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:31:26.646000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:31:26.646000 audit[1076]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffff10cf620 a2=4000 a3=7ffff10cf6bc items=0 ppid=1 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:26.646000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:31:26.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.460083 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:31:26.460093 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:31:26.470755 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:31:26.471136 systemd[1]: systemd-journald.service: Consumed 1.207s CPU time. Jun 25 16:31:26.654751 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:31:26.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.655355 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:31:26.656553 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:31:26.658003 kernel: ACPI: bus type drm_connector registered Jun 25 16:31:26.658430 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:31:26.659499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:31:26.660673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:31:26.661830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:31:26.662956 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:31:26.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.664238 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:26.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.665901 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:31:26.666069 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
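The journald line above sizes the runtime journal under /run/log/journal against its backing tmpfs (6.0M in use, 48.3M cap); those caps are computed from filesystem size by default and can be pinned explicitly in journald.conf. A sketch using a drop-in (the option names are real journald.conf settings; the values are arbitrary examples, not this system's configuration):

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M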
Jun 25 16:31:26.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.667739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:26.667886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:31:26.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.669528 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:31:26.669680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:31:26.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.671402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:26.671543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:26.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.673291 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:31:26.673434 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:31:26.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.675107 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:31:26.675255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:26.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:26.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.676877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:26.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.678608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:31:26.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.680238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:31:26.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.682168 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:31:26.693146 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:31:26.695601 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:31:26.696754 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:31:26.698363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:31:26.700749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:31:26.701961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:31:26.703218 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:31:26.704361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:31:26.705773 systemd-journald[1076]: Time spent on flushing to /var/log/journal/60e511aa24ed4201925db8201f6b3d55 is 20.361ms for 1111 entries. Jun 25 16:31:26.705773 systemd-journald[1076]: System Journal (/var/log/journal/60e511aa24ed4201925db8201f6b3d55) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:31:26.738881 systemd-journald[1076]: Received client request to flush runtime journal. Jun 25 16:31:26.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:26.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.706382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:31:26.709432 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:31:26.713848 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:26.715256 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:31:26.739543 udevadm[1102]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:31:26.716545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:31:26.717923 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:31:26.719320 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:31:26.726272 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:31:26.730057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:26.731493 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:31:26.733893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:31:26.740222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:31:26.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:26.753897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:26.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.218466 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:31:27.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.219000 audit: BPF prog-id=18 op=LOAD Jun 25 16:31:27.219000 audit: BPF prog-id=19 op=LOAD Jun 25 16:31:27.219000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:31:27.219000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:31:27.230161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:27.246898 systemd-udevd[1106]: Using default interface naming scheme 'v252'. Jun 25 16:31:27.262940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:31:27.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.264000 audit: BPF prog-id=20 op=LOAD Jun 25 16:31:27.271129 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 25 16:31:27.273000 audit: BPF prog-id=21 op=LOAD Jun 25 16:31:27.273000 audit: BPF prog-id=22 op=LOAD Jun 25 16:31:27.273000 audit: BPF prog-id=23 op=LOAD Jun 25 16:31:27.275417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:31:27.293045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1110) Jun 25 16:31:27.293685 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:31:27.304222 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:31:27.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.314002 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1121) Jun 25 16:31:27.335007 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:31:27.351237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:31:27.352008 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:31:27.353287 systemd-networkd[1111]: lo: Link UP Jun 25 16:31:27.353293 systemd-networkd[1111]: lo: Gained carrier Jun 25 16:31:27.353635 systemd-networkd[1111]: Enumeration completed Jun 25 16:31:27.353711 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:31:27.353733 systemd-networkd[1111]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:31:27.353736 systemd-networkd[1111]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:31:27.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.355360 systemd-networkd[1111]: eth0: Link UP Jun 25 16:31:27.355365 systemd-networkd[1111]: eth0: Gained carrier Jun 25 16:31:27.355375 systemd-networkd[1111]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:31:27.359106 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:31:27.366045 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jun 25 16:31:27.369997 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:31:27.371140 systemd-networkd[1111]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:31:27.387005 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:31:27.469115 kernel: SVM: TSC scaling supported Jun 25 16:31:27.469214 kernel: kvm: Nested Virtualization enabled Jun 25 16:31:27.469229 kernel: SVM: kvm: Nested Paging enabled Jun 25 16:31:27.470090 kernel: SVM: Virtual VMLOAD VMSAVE supported Jun 25 16:31:27.470113 kernel: SVM: Virtual GIF supported Jun 25 16:31:27.470995 kernel: SVM: LBR virtualization supported Jun 25 16:31:27.500012 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:31:27.534363 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
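The DHCPv4 lease logged above (10.0.0.149/16 via gateway 10.0.0.1) can be unpacked with the standard-library ipaddress module to confirm what that /16 covers; a small sketch using only values taken from the log:

```python
import ipaddress

# Values copied from the systemd-networkd DHCPv4 line above.
lease = ipaddress.ip_interface("10.0.0.149/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(lease.network)                # 10.0.0.0/16
print(lease.network.num_addresses)  # 65536
print(gateway in lease.network)     # True
```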
Jun 25 16:31:27.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.561187 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:31:27.568096 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:31:27.596004 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:31:27.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.597296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:27.610167 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:31:27.614023 lvm[1146]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:31:27.638834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:31:27.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.640087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:27.641206 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:31:27.641227 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:31:27.642269 systemd[1]: Reached target machines.target - Containers. Jun 25 16:31:27.654155 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:31:27.655470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:27.655538 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:27.657172 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:31:27.659685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:31:27.662709 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:31:27.665314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:31:27.667156 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1148 (bootctl) Jun 25 16:31:27.669099 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:31:27.671316 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 25 16:31:27.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.678043 kernel: loop0: detected capacity change from 0 to 80584 Jun 25 16:31:27.695005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:31:27.716924 systemd-fsck[1156]: fsck.fat 4.2 (2021-01-31) Jun 25 16:31:27.716924 systemd-fsck[1156]: /dev/vda1: 809 files, 120401/258078 clusters Jun 25 16:31:27.719225 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:31:27.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.723049 kernel: loop1: detected capacity change from 0 to 209816 Jun 25 16:31:27.726178 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:31:27.973164 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:31:27.992973 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:31:27.993628 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:31:27.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:27.995036 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:31:27.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.005701 kernel: loop2: detected capacity change from 0 to 139360 Jun 25 16:31:28.042008 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:31:28.049001 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 16:31:28.057003 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:31:28.061926 (sd-sysext)[1162]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 16:31:28.063014 (sd-sysext)[1162]: Merged extensions into '/usr'. Jun 25 16:31:28.064404 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:31:28.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.073153 systemd[1]: Starting ensure-sysext.service... Jun 25 16:31:28.075528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:31:28.087160 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:31:28.088584 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:31:28.089260 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jun 25 16:31:28.091250 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:31:28.092431 systemd[1]: Reloading. Jun 25 16:31:28.124141 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:31:28.209239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:31:28.274000 audit: BPF prog-id=24 op=LOAD Jun 25 16:31:28.274000 audit: BPF prog-id=25 op=LOAD Jun 25 16:31:28.274000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:31:28.274000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:31:28.275000 audit: BPF prog-id=26 op=LOAD Jun 25 16:31:28.275000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:31:28.276000 audit: BPF prog-id=27 op=LOAD Jun 25 16:31:28.276000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:31:28.276000 audit: BPF prog-id=28 op=LOAD Jun 25 16:31:28.277000 audit: BPF prog-id=29 op=LOAD Jun 25 16:31:28.277000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:31:28.277000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:31:28.279000 audit: BPF prog-id=30 op=LOAD Jun 25 16:31:28.279000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:31:28.279000 audit: BPF prog-id=31 op=LOAD Jun 25 16:31:28.279000 audit: BPF prog-id=32 op=LOAD Jun 25 16:31:28.279000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:31:28.279000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:31:28.283183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:31:28.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.285976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:28.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.290108 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:31:28.292929 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:31:28.295398 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:31:28.296000 audit: BPF prog-id=33 op=LOAD Jun 25 16:31:28.298000 audit: BPF prog-id=34 op=LOAD Jun 25 16:31:28.298051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:31:28.300748 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:31:28.302911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:31:28.307427 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.307622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:31:28.308888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:28.311733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:28.313962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 16:31:28.314000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.315266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:28.315424 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:28.315524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.316368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:28.316477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:31:28.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.318218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:28.318317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:28.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.320901 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:31:28.321085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:28.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.325863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.326162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:31:28.334407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:28.337272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:28.340086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 16:31:28.341450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:28.341669 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:28.342171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.343850 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:31:28.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.345891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:28.346097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:31:28.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.348207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:28.348348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:28.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:28.350450 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:31:28.350631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:28.351054 augenrules[1247]: No rules Jun 25 16:31:28.349000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:31:28.349000 audit[1247]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca3a7df60 a2=420 a3=0 items=0 ppid=1221 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:28.349000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:31:28.352497 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:31:28.354480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:31:28.356223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jun 25 16:31:28.361435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.361730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:31:28.372458 systemd-resolved[1230]: Positive Trust Anchors: Jun 25 16:31:28.372474 systemd-resolved[1230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:31:28.372513 systemd-resolved[1230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:31:28.373428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:28.376647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:31:28.376747 systemd-resolved[1230]: Defaulting to hostname 'linux'. Jun 25 16:31:28.379576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:28.382171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:31:28.383354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:28.383507 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:28.384924 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:31:28.386076 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:31:28.386214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:28.938266 systemd-timesyncd[1231]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 16:31:28.938325 systemd-timesyncd[1231]: Initial clock synchronization to Tue 2024-06-25 16:31:28.938191 UTC. Jun 25 16:31:28.938372 systemd-resolved[1230]: Clock change detected. Flushing caches. Jun 25 16:31:28.938927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:31:28.940458 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:31:28.942177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:28.942297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:31:28.943859 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:31:28.943989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:31:28.945520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:28.945650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:28.947085 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 25 16:31:28.947197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:28.948660 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:31:28.950425 systemd[1]: Reached target network.target - Network. Jun 25 16:31:28.951470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:28.952602 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:31:28.953734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:31:28.953784 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:31:28.955269 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:31:28.956792 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:31:28.958212 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:31:28.959629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:31:28.961006 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:31:28.962221 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:31:28.962252 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:31:28.963208 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:31:28.964644 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:31:28.967204 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:31:28.975624 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:31:28.976800 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:28.976862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:31:28.977459 systemd[1]: Finished ensure-sysext.service. Jun 25 16:31:28.978448 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:31:28.980549 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:31:28.981648 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:31:28.982651 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:31:28.982675 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:31:28.984156 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:31:28.986812 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:31:28.989482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:31:28.992129 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:31:28.993438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jun 25 16:31:28.994609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:31:28.995275 jq[1262]: false Jun 25 16:31:28.996928 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:31:29.000120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:31:29.004109 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:31:29.005409 dbus-daemon[1261]: [system] SELinux support is enabled Jun 25 16:31:29.007841 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:31:29.009047 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:29.009105 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:31:29.009158 extend-filesystems[1263]: Found loop3 Jun 25 16:31:29.009158 extend-filesystems[1263]: Found loop4 Jun 25 16:31:29.009158 extend-filesystems[1263]: Found loop5 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found sr0 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda1 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda2 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda3 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found usr Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda4 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda6 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda7 Jun 25 16:31:29.011861 extend-filesystems[1263]: Found vda9 Jun 25 16:31:29.011861 extend-filesystems[1263]: Checking size of /dev/vda9 Jun 25 16:31:29.011875 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:31:29.015150 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:31:29.023870 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:31:29.026628 update_engine[1280]: I0625 16:31:29.025254 1280 main.cc:92] Flatcar Update Engine starting Jun 25 16:31:29.026628 update_engine[1280]: I0625 16:31:29.026337 1280 update_check_scheduler.cc:74] Next update check in 2m55s Jun 25 16:31:29.025531 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:31:29.029330 extend-filesystems[1263]: Resized partition /dev/vda9 Jun 25 16:31:29.032726 extend-filesystems[1285]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:31:29.039237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1112) Jun 25 16:31:29.039363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:31:29.039550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:31:29.039919 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:31:29.040063 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:31:29.044361 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:31:29.046657 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 16:31:29.044656 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
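The resize reported above takes /dev/vda9 from 553472 to 1864699 blocks; assuming the 4 KiB block size that resize2fs reports just below, the before/after sizes work out as follows (a quick arithmetic check, nothing more):

```python
# Block counts from the EXT4 resize logged above; 4 KiB blocks as reported
# by resize2fs for /dev/vda9.
BLOCK = 4096
before, after = 553_472, 1_864_699

print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # 2.11 GiB
print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # 7.11 GiB
```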
Jun 25 16:31:29.049439 jq[1283]: true Jun 25 16:31:29.064380 tar[1287]: linux-amd64/helm Jun 25 16:31:29.060342 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:31:29.063733 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:31:29.064641 jq[1290]: true Jun 25 16:31:29.063773 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:31:29.065531 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:31:29.065547 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:31:29.067256 systemd-logind[1274]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:31:29.067278 systemd-logind[1274]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:31:29.067535 systemd-logind[1274]: New seat seat0. Jun 25 16:31:29.068507 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:31:29.074235 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:31:29.100497 locksmithd[1296]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:31:29.107769 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 16:31:29.129415 extend-filesystems[1285]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:31:29.129415 extend-filesystems[1285]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:31:29.129415 extend-filesystems[1285]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 16:31:29.134933 extend-filesystems[1263]: Resized filesystem in /dev/vda9 Jun 25 16:31:29.133664 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:31:29.133832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:31:29.138050 bash[1306]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:31:29.138900 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:31:29.140825 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:31:29.231305 containerd[1288]: time="2024-06-25T16:31:29.231211609Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:31:29.257255 containerd[1288]: time="2024-06-25T16:31:29.257195339Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:31:29.257255 containerd[1288]: time="2024-06-25T16:31:29.257253177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.258705 containerd[1288]: time="2024-06-25T16:31:29.258650086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:31:29.258705 containerd[1288]: time="2024-06-25T16:31:29.258700891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:31:29.258996 containerd[1288]: time="2024-06-25T16:31:29.258969475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:31:29.258996 containerd[1288]: time="2024-06-25T16:31:29.258990685Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:31:29.259086 containerd[1288]: time="2024-06-25T16:31:29.259058402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259182 containerd[1288]: time="2024-06-25T16:31:29.259148961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259182 containerd[1288]: time="2024-06-25T16:31:29.259175040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259247 containerd[1288]: time="2024-06-25T16:31:29.259238569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259444 containerd[1288]: time="2024-06-25T16:31:29.259423055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259469 containerd[1288]: time="2024-06-25T16:31:29.259446339Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:31:29.259469 containerd[1288]: time="2024-06-25T16:31:29.259456037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259574 containerd[1288]: time="2024-06-25T16:31:29.259552919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:31:29.259574 containerd[1288]: time="2024-06-25T16:31:29.259570973Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:31:29.259634 containerd[1288]: time="2024-06-25T16:31:29.259615516Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:31:29.259634 containerd[1288]: time="2024-06-25T16:31:29.259632338Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:31:29.265764 containerd[1288]: time="2024-06-25T16:31:29.265725811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:31:29.265815 containerd[1288]: time="2024-06-25T16:31:29.265781275Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:31:29.265815 containerd[1288]: time="2024-06-25T16:31:29.265803486Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:31:29.265852 containerd[1288]: time="2024-06-25T16:31:29.265837280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 25 16:31:29.265881 containerd[1288]: time="2024-06-25T16:31:29.265856896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:31:29.265881 containerd[1288]: time="2024-06-25T16:31:29.265877645Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:31:29.265917 containerd[1288]: time="2024-06-25T16:31:29.265889638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:31:29.266051 containerd[1288]: time="2024-06-25T16:31:29.266025412Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:31:29.266080 containerd[1288]: time="2024-06-25T16:31:29.266059386Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:31:29.266101 containerd[1288]: time="2024-06-25T16:31:29.266079424Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:31:29.266120 containerd[1288]: time="2024-06-25T16:31:29.266097978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:31:29.266155 containerd[1288]: time="2024-06-25T16:31:29.266120320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266155 containerd[1288]: time="2024-06-25T16:31:29.266143824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266196 containerd[1288]: time="2024-06-25T16:31:29.266162309Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266196 containerd[1288]: time="2024-06-25T16:31:29.266180784Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266234 containerd[1288]: time="2024-06-25T16:31:29.266198797Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266234 containerd[1288]: time="2024-06-25T16:31:29.266216380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266272 containerd[1288]: time="2024-06-25T16:31:29.266233623Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266272 containerd[1288]: time="2024-06-25T16:31:29.266251095Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:31:29.266397 containerd[1288]: time="2024-06-25T16:31:29.266373324Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:31:29.266768 containerd[1288]: time="2024-06-25T16:31:29.266716588Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:31:29.266814 containerd[1288]: time="2024-06-25T16:31:29.266787050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266814 containerd[1288]: time="2024-06-25T16:31:29.266803100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 25 16:31:29.266881 containerd[1288]: time="2024-06-25T16:31:29.266831604Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:31:29.266911 containerd[1288]: time="2024-06-25T16:31:29.266894862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266911 containerd[1288]: time="2024-06-25T16:31:29.266906935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266971 containerd[1288]: time="2024-06-25T16:31:29.266918517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266971 containerd[1288]: time="2024-06-25T16:31:29.266929277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266971 containerd[1288]: time="2024-06-25T16:31:29.266941409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266971 containerd[1288]: time="2024-06-25T16:31:29.266952991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.266971 containerd[1288]: time="2024-06-25T16:31:29.266964342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267112 containerd[1288]: time="2024-06-25T16:31:29.266978208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267112 containerd[1288]: time="2024-06-25T16:31:29.266995872Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:31:29.267170 containerd[1288]: time="2024-06-25T16:31:29.267117810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267170 containerd[1288]: time="2024-06-25T16:31:29.267133750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267170 containerd[1288]: time="2024-06-25T16:31:29.267144711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267170 containerd[1288]: time="2024-06-25T16:31:29.267155902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267170 containerd[1288]: time="2024-06-25T16:31:29.267168255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267320 containerd[1288]: time="2024-06-25T16:31:29.267181540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267320 containerd[1288]: time="2024-06-25T16:31:29.267192370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:31:29.267320 containerd[1288]: time="2024-06-25T16:31:29.267202188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:31:29.267535 containerd[1288]: time="2024-06-25T16:31:29.267475621Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:31:29.267685 containerd[1288]: time="2024-06-25T16:31:29.267545702Z" level=info msg="Connect containerd service" Jun 25 16:31:29.267685 containerd[1288]: time="2024-06-25T16:31:29.267577632Z" level=info msg="using legacy CRI server" Jun 25 16:31:29.267685 containerd[1288]: time="2024-06-25T16:31:29.267586158Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:31:29.267685 containerd[1288]: time="2024-06-25T16:31:29.267614090Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:31:29.268717 containerd[1288]: time="2024-06-25T16:31:29.268684958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:31:29.269329 containerd[1288]: time="2024-06-25T16:31:29.269285955Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:31:29.269368 containerd[1288]: time="2024-06-25T16:31:29.269342831Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:31:29.269397 containerd[1288]: time="2024-06-25T16:31:29.269370764Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:31:29.269416 containerd[1288]: time="2024-06-25T16:31:29.269393436Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:31:29.269786 containerd[1288]: time="2024-06-25T16:31:29.269728634Z" level=info msg="Start subscribing containerd event" Jun 25 16:31:29.269847 containerd[1288]: time="2024-06-25T16:31:29.269815928Z" level=info msg="Start recovering state" Jun 25 16:31:29.269960 containerd[1288]: time="2024-06-25T16:31:29.269825526Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:31:29.270119 containerd[1288]: time="2024-06-25T16:31:29.270094781Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:31:29.270587 containerd[1288]: time="2024-06-25T16:31:29.270569581Z" level=info msg="Start event monitor" Jun 25 16:31:29.270613 containerd[1288]: time="2024-06-25T16:31:29.270598906Z" level=info msg="Start snapshots syncer" Jun 25 16:31:29.270613 containerd[1288]: time="2024-06-25T16:31:29.270609656Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:31:29.270655 containerd[1288]: time="2024-06-25T16:31:29.270617962Z" level=info msg="Start streaming server" Jun 25 16:31:29.270712 containerd[1288]: time="2024-06-25T16:31:29.270693463Z" level=info msg="containerd successfully booted in 0.040263s" Jun 25 16:31:29.270804 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:31:29.274996 systemd-networkd[1111]: eth0: Gained IPv6LL Jun 25 16:31:29.276989 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:31:29.278689 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:31:29.286141 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 16:31:29.289602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:31:29.292358 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:31:29.301540 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:31:29.301722 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 16:31:29.303345 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:31:29.306625 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:31:29.396471 sshd_keygen[1278]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:31:29.417602 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:31:29.427106 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:31:29.432562 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:31:29.432790 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:31:29.435638 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 25 16:31:29.454068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:31:29.461459 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:31:29.514420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:31:29.516010 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:31:29.551895 tar[1287]: linux-amd64/LICENSE Jun 25 16:31:29.552045 tar[1287]: linux-amd64/README.md Jun 25 16:31:29.568999 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:31:29.910793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:31:29.912535 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:31:29.914895 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:31:29.920989 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:31:29.921166 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:31:29.922776 systemd[1]: Startup finished in 1.236s (kernel) + 5.896s (initrd) + 3.619s (userspace) = 10.752s. Jun 25 16:31:30.400783 kubelet[1349]: E0625 16:31:30.400688 1349 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:31:30.403649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:31:30.403824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:31:33.989739 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:31:33.991348 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:53012.service - OpenSSH per-connection server daemon (10.0.0.1:53012). Jun 25 16:31:34.028509 sshd[1359]: Accepted publickey for core from 10.0.0.1 port 53012 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.030463 sshd[1359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.039915 systemd-logind[1274]: New session 1 of user core. Jun 25 16:31:34.041223 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:31:34.055315 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:31:34.065518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:31:34.067318 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:31:34.070621 (systemd)[1362]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.158431 systemd[1362]: Queued start job for default target default.target. Jun 25 16:31:34.168359 systemd[1362]: Reached target paths.target - Paths. Jun 25 16:31:34.168385 systemd[1362]: Reached target sockets.target - Sockets. Jun 25 16:31:34.168400 systemd[1362]: Reached target timers.target - Timers. Jun 25 16:31:34.168413 systemd[1362]: Reached target basic.target - Basic System. Jun 25 16:31:34.168469 systemd[1362]: Reached target default.target - Main User Target. Jun 25 16:31:34.168500 systemd[1362]: Startup finished in 92ms. Jun 25 16:31:34.168668 systemd[1]: Started user@500.service - User Manager for UID 500. 
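The "Startup finished" line above reports 1.236s (kernel) + 5.896s (initrd) + 3.619s (userspace) = 10.752s, while the three printed components add up to 10.751s; the spare millisecond is consistent with systemd summing the raw microsecond counters first and rounding each figure separately for display. Checking the printed values:

    # Arithmetic check on the "Startup finished" figures above.
    kernel, initrd, userspace = 1.236, 5.896, 3.619   # seconds, as printed
    total_printed = 10.752

    print(f"sum of printed phases: {kernel + initrd + userspace:.3f} s")  # 10.751 s
    print(f"printed total:         {total_printed:.3f} s")                # 10.752 s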
Jun 25 16:31:34.170141 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:31:34.232073 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:53020.service - OpenSSH per-connection server daemon (10.0.0.1:53020). Jun 25 16:31:34.262962 sshd[1371]: Accepted publickey for core from 10.0.0.1 port 53020 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.264138 sshd[1371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.267642 systemd-logind[1274]: New session 2 of user core. Jun 25 16:31:34.276962 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:31:34.331082 sshd[1371]: pam_unix(sshd:session): session closed for user core Jun 25 16:31:34.340757 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:53020.service: Deactivated successfully. Jun 25 16:31:34.341302 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:31:34.341720 systemd-logind[1274]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:31:34.342891 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:53024.service - OpenSSH per-connection server daemon (10.0.0.1:53024). Jun 25 16:31:34.343548 systemd-logind[1274]: Removed session 2. Jun 25 16:31:34.368985 sshd[1377]: Accepted publickey for core from 10.0.0.1 port 53024 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.369957 sshd[1377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.372902 systemd-logind[1274]: New session 3 of user core. Jun 25 16:31:34.378874 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:31:34.428767 sshd[1377]: pam_unix(sshd:session): session closed for user core Jun 25 16:31:34.438896 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:53024.service: Deactivated successfully. Jun 25 16:31:34.439396 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:31:34.439915 systemd-logind[1274]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:31:34.441143 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038). Jun 25 16:31:34.441870 systemd-logind[1274]: Removed session 3. Jun 25 16:31:34.467562 sshd[1383]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.469068 sshd[1383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.472620 systemd-logind[1274]: New session 4 of user core. Jun 25 16:31:34.478911 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:31:34.533277 sshd[1383]: pam_unix(sshd:session): session closed for user core Jun 25 16:31:34.549225 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:53038.service: Deactivated successfully. Jun 25 16:31:34.549767 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:31:34.550357 systemd-logind[1274]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:31:34.551703 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:53042.service - OpenSSH per-connection server daemon (10.0.0.1:53042). Jun 25 16:31:34.552377 systemd-logind[1274]: Removed session 4. Jun 25 16:31:34.580331 sshd[1389]: Accepted publickey for core from 10.0.0.1 port 53042 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.581564 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.585093 systemd-logind[1274]: New session 5 of user core. 
Jun 25 16:31:34.601107 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:31:34.659370 sudo[1392]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:31:34.659599 sudo[1392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:31:34.673540 sudo[1392]: pam_unix(sudo:session): session closed for user root Jun 25 16:31:34.675235 sshd[1389]: pam_unix(sshd:session): session closed for user core Jun 25 16:31:34.693378 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:53042.service: Deactivated successfully. Jun 25 16:31:34.694159 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:31:34.694797 systemd-logind[1274]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:31:34.696568 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). Jun 25 16:31:34.697487 systemd-logind[1274]: Removed session 5. Jun 25 16:31:34.726514 sshd[1396]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.727963 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.732378 systemd-logind[1274]: New session 6 of user core. Jun 25 16:31:34.738961 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:31:34.797118 sudo[1400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:31:34.797437 sudo[1400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:31:34.800822 sudo[1400]: pam_unix(sudo:session): session closed for user root Jun 25 16:31:34.805259 sudo[1399]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:31:34.805472 sudo[1399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:31:34.823014 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:31:34.823000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:31:34.824286 auditctl[1403]: No rules Jun 25 16:31:34.824625 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:31:34.824761 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:31:34.826086 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:31:34.843795 augenrules[1420]: No rules Jun 25 16:31:34.844343 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:31:34.862866 kernel: kauditd_printk_skb: 145 callbacks suppressed Jun 25 16:31:34.862921 kernel: audit: type=1305 audit(1719333094.823:186): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:31:34.863150 sudo[1399]: pam_unix(sudo:session): session closed for user root Jun 25 16:31:34.864845 sshd[1396]: pam_unix(sshd:session): session closed for user core Jun 25 16:31:34.823000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeeb27c6f0 a2=420 a3=0 items=0 ppid=1 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:34.868386 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:53052.service: Deactivated successfully. 
Jun 25 16:31:34.868875 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:31:34.869397 systemd-logind[1274]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:31:34.870695 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:53056.service - OpenSSH per-connection server daemon (10.0.0.1:53056). Jun 25 16:31:34.871578 systemd-logind[1274]: Removed session 6. Jun 25 16:31:34.931492 kernel: audit: type=1300 audit(1719333094.823:186): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeeb27c6f0 a2=420 a3=0 items=0 ppid=1 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:34.931597 kernel: audit: type=1327 audit(1719333094.823:186): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:31:34.823000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:31:34.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.932764 kernel: audit: type=1131 audit(1719333094.824:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.939697 kernel: audit: type=1130 audit(1719333094.843:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.862000 audit[1399]: USER_END pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.862000 audit[1399]: CRED_DISP pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.946913 kernel: audit: type=1106 audit(1719333094.862:189): pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.946952 kernel: audit: type=1104 audit(1719333094.862:190): pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:34.946968 kernel: audit: type=1106 audit(1719333094.865:191): pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.865000 audit[1396]: USER_END pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.948733 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 53056 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:31:34.950261 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:31:34.865000 audit[1396]: CRED_DISP pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.953742 kernel: audit: type=1104 audit(1719333094.865:192): pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.953803 kernel: audit: type=1131 audit(1719333094.867:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:53052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:53052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.954359 systemd-logind[1274]: New session 7 of user core. Jun 25 16:31:34.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:53056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:34.947000 audit[1426]: USER_ACCT pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.948000 audit[1426]: CRED_ACQ pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.949000 audit[1426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc06e672a0 a2=3 a3=7f5acb84d480 items=0 ppid=1 pid=1426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:34.949000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:31:34.961989 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 25 16:31:34.965000 audit[1426]: USER_START pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:34.967000 audit[1428]: CRED_ACQ pid=1428 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:31:35.015000 audit[1429]: USER_ACCT pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:35.015000 audit[1429]: CRED_REFR pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:35.015881 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:31:35.016084 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:31:35.016000 audit[1429]: USER_START pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:31:35.107047 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:31:35.339467 dockerd[1440]: time="2024-06-25T16:31:35.339393007Z" level=info msg="Starting up" Jun 25 16:31:38.459631 dockerd[1440]: time="2024-06-25T16:31:38.459561679Z" level=info msg="Loading containers: start." 
Jun 25 16:31:38.527000 audit[1475]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.527000 audit[1475]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff79c028e0 a2=0 a3=7ff7f3485e90 items=0 ppid=1440 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.527000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:31:38.530000 audit[1477]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.530000 audit[1477]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe1014e3b0 a2=0 a3=7f5b94245e90 items=0 ppid=1440 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.530000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:31:38.532000 audit[1479]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.532000 audit[1479]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9b463950 a2=0 a3=7fb2e8b85e90 items=0 ppid=1440 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.532000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:31:38.534000 audit[1481]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.534000 audit[1481]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdce6ebfd0 a2=0 a3=7f72dc4c9e90 items=0 ppid=1440 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.534000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:31:38.537000 audit[1483]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.537000 audit[1483]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd7ff7af10 a2=0 a3=7fd193986e90 items=0 ppid=1440 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.537000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:31:38.539000 audit[1485]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:31:38.539000 audit[1485]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffee5016600 a2=0 a3=7f6d6e2fce90 items=0 ppid=1440 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.539000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:31:38.712000 audit[1487]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.712000 audit[1487]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed85ddfd0 a2=0 a3=7f39a335be90 items=0 ppid=1440 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.712000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:31:38.713000 audit[1489]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.713000 audit[1489]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff600f3cc0 a2=0 a3=7f65170f5e90 items=0 ppid=1440 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.713000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:31:38.715000 audit[1491]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:38.715000 audit[1491]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffddfd92fb0 a2=0 a3=7fb7ede5be90 items=0 ppid=1440 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:38.715000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:31:39.005000 audit[1495]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.005000 audit[1495]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd4b177440 a2=0 a3=7f1fa5969e90 items=0 ppid=1440 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.005000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:31:39.006000 audit[1496]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.006000 audit[1496]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe6d2ba490 a2=0 a3=7fc159f13e90 items=0 ppid=1440 
pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.006000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:31:39.013769 kernel: Initializing XFRM netlink socket Jun 25 16:31:39.042000 audit[1505]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.042000 audit[1505]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd05786db0 a2=0 a3=7ffb10a10e90 items=0 ppid=1440 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.042000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:31:39.053000 audit[1508]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.053000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc134a8dd0 a2=0 a3=7ffa3466ce90 items=0 ppid=1440 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.053000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:31:39.057000 audit[1512]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.057000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffff8411b70 a2=0 a3=7f33c2d05e90 items=0 ppid=1440 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.057000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:31:39.058000 audit[1514]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.058000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffbaa84a00 a2=0 a3=7ff07b2ece90 items=0 ppid=1440 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.058000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:31:39.060000 audit[1516]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.060000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff50ad8990 
a2=0 a3=7fbedd020e90 items=0 ppid=1440 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.060000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:31:39.062000 audit[1518]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.062000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff3f808740 a2=0 a3=7fa9ad4dee90 items=0 ppid=1440 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.062000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:31:39.063000 audit[1520]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.063000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcb8def690 a2=0 a3=7fb75c36ee90 items=0 ppid=1440 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:31:39.069000 audit[1523]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.069000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd3e2d38c0 a2=0 a3=7fea5ba09e90 items=0 ppid=1440 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.069000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:31:39.071000 audit[1525]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.071000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdcb42d350 a2=0 a3=7f6e352a9e90 items=0 ppid=1440 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.071000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:31:39.072000 audit[1527]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:31:39.072000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdd04e94f0 a2=0 a3=7ff7479bbe90 items=0 ppid=1440 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.072000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:31:39.074000 audit[1529]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.074000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffedaf7a2e0 a2=0 a3=7f83d1a3fe90 items=0 ppid=1440 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.074000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:31:39.075564 systemd-networkd[1111]: docker0: Link UP Jun 25 16:31:39.274000 audit[1533]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.274000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda2a8e070 a2=0 a3=7fe110fade90 items=0 ppid=1440 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:31:39.275000 audit[1534]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:31:39.275000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe143234e0 a2=0 a3=7f7df7192e90 items=0 ppid=1440 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:39.275000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:31:39.276653 dockerd[1440]: time="2024-06-25T16:31:39.276615704Z" level=info msg="Loading containers: done." Jun 25 16:31:39.351120 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2131744882-merged.mount: Deactivated successfully. 
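In the audit records above, the command that triggered each rule change is carried in the PROCTITLE field as hex-encoded argv with NUL separators, which is why dockerd's iptables calls show up as long hex strings. Decoded, the first one in this run is /usr/sbin/iptables --wait -t nat -N DOCKER, and the earlier record from the audit-rules restart is /sbin/auditctl -D. A small decoder (the helper name is my own):

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).decode("utf-8", errors="replace").replace("\x00", " ")

    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974"
        "002D74006E6174002D4E00444F434B4552"))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D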
Jun 25 16:31:39.490503 dockerd[1440]: time="2024-06-25T16:31:39.490423237Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:31:39.490906 dockerd[1440]: time="2024-06-25T16:31:39.490700126Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:31:39.490906 dockerd[1440]: time="2024-06-25T16:31:39.490841400Z" level=info msg="Daemon has completed initialization" Jun 25 16:31:39.841941 dockerd[1440]: time="2024-06-25T16:31:39.841870374Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:31:39.842061 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:31:39.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:39.843206 kernel: kauditd_printk_skb: 83 callbacks suppressed Jun 25 16:31:39.843279 kernel: audit: type=1130 audit(1719333099.841:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.477292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:31:40.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.477478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:31:40.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.482811 kernel: audit: type=1130 audit(1719333100.476:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.482851 kernel: audit: type=1131 audit(1719333100.476:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.495290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:31:40.592534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:31:40.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:40.596775 kernel: audit: type=1130 audit(1719333100.592:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:40.709428 kubelet[1579]: E0625 16:31:40.709361 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:31:40.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:31:40.713496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:31:40.713647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:31:40.717783 kernel: audit: type=1131 audit(1719333100.713:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:31:40.812799 containerd[1288]: time="2024-06-25T16:31:40.812669466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:31:42.451720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873968051.mount: Deactivated successfully. Jun 25 16:31:43.881310 containerd[1288]: time="2024-06-25T16:31:43.881241022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:43.882231 containerd[1288]: time="2024-06-25T16:31:43.882140919Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:31:43.883957 containerd[1288]: time="2024-06-25T16:31:43.883920986Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:43.885821 containerd[1288]: time="2024-06-25T16:31:43.885787695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:43.888375 containerd[1288]: time="2024-06-25T16:31:43.888286640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:43.889699 containerd[1288]: time="2024-06-25T16:31:43.889644104Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.076929624s" Jun 25 16:31:43.889781 containerd[1288]: time="2024-06-25T16:31:43.889707944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:31:43.916408 containerd[1288]: time="2024-06-25T16:31:43.916357613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:31:45.827945 containerd[1288]: time="2024-06-25T16:31:45.827816075Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:45.830070 containerd[1288]: time="2024-06-25T16:31:45.829728370Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:31:45.832143 containerd[1288]: time="2024-06-25T16:31:45.832092942Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:45.835519 containerd[1288]: time="2024-06-25T16:31:45.835410461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:45.842255 containerd[1288]: time="2024-06-25T16:31:45.842169061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:45.843610 containerd[1288]: time="2024-06-25T16:31:45.843522579Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.927098301s" Jun 25 16:31:45.843610 containerd[1288]: time="2024-06-25T16:31:45.843603500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:31:45.871441 containerd[1288]: time="2024-06-25T16:31:45.871381244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:31:46.969650 containerd[1288]: time="2024-06-25T16:31:46.969566960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:46.971166 containerd[1288]: time="2024-06-25T16:31:46.971093281Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:31:46.972953 containerd[1288]: time="2024-06-25T16:31:46.972909386Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:46.975531 containerd[1288]: time="2024-06-25T16:31:46.975475937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:46.978596 containerd[1288]: time="2024-06-25T16:31:46.978519643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:46.980100 containerd[1288]: time="2024-06-25T16:31:46.980010317Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.108559042s" Jun 25 16:31:46.980196 containerd[1288]: time="2024-06-25T16:31:46.980107790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:31:47.015506 containerd[1288]: time="2024-06-25T16:31:47.015440005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:31:48.299789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539966350.mount: Deactivated successfully. Jun 25 16:31:50.510909 containerd[1288]: time="2024-06-25T16:31:50.509352543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:50.586254 containerd[1288]: time="2024-06-25T16:31:50.586161333Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:31:50.644678 containerd[1288]: time="2024-06-25T16:31:50.643842364Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:50.727514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:31:50.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.727719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:31:50.728655 containerd[1288]: time="2024-06-25T16:31:50.728597109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:50.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.769152 kernel: audit: type=1130 audit(1719333110.726:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.769318 kernel: audit: type=1131 audit(1719333110.727:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.774324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 16:31:50.793847 containerd[1288]: time="2024-06-25T16:31:50.791537690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:50.796725 containerd[1288]: time="2024-06-25T16:31:50.794409754Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 3.778913333s" Jun 25 16:31:50.796725 containerd[1288]: time="2024-06-25T16:31:50.795455144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:31:50.840784 containerd[1288]: time="2024-06-25T16:31:50.840675255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:31:50.910921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:31:50.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.942194 kernel: audit: type=1130 audit(1719333110.910:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:51.571085 kubelet[1692]: E0625 16:31:51.571027 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:31:51.573689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:31:51.573820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:31:51.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:31:51.586912 kernel: audit: type=1131 audit(1719333111.573:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:31:54.505710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682886822.mount: Deactivated successfully. 
Jun 25 16:31:55.242688 containerd[1288]: time="2024-06-25T16:31:55.241383271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:55.243581 containerd[1288]: time="2024-06-25T16:31:55.243452089Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:31:55.251245 containerd[1288]: time="2024-06-25T16:31:55.251152785Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:55.256603 containerd[1288]: time="2024-06-25T16:31:55.256406154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:55.261462 containerd[1288]: time="2024-06-25T16:31:55.259792592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:55.261462 containerd[1288]: time="2024-06-25T16:31:55.260868248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 4.42009006s" Jun 25 16:31:55.261462 containerd[1288]: time="2024-06-25T16:31:55.260905188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:31:55.302231 containerd[1288]: time="2024-06-25T16:31:55.302163823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:31:56.373889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745283557.mount: Deactivated successfully. Jun 25 16:32:01.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:01.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:01.730367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:32:01.760920 kernel: audit: type=1130 audit(1719333121.729:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:01.760965 kernel: audit: type=1131 audit(1719333121.729:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:01.730601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:01.759498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:01.908246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
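kubelet keeps exiting immediately in this stretch because /var/lib/kubelet/config.yaml does not exist yet (it is normally written later by kubeadm or the cluster bootstrap step), and systemd keeps scheduling another attempt; the gap between each failure and the next "Scheduled restart job" message is about ten seconds, which would match a Restart= policy with RestartSec=10s, although the unit file itself is not shown in this log. Using the timestamps printed above:

    # Spacing between a kubelet failure and the next scheduled restart,
    # timestamps copied from the log above (same day, so the date is ignored).
    from datetime import datetime

    def t(s: str) -> datetime:
        return datetime.strptime(s, "%H:%M:%S.%f")

    failures = ["16:31:40.713647", "16:31:51.573820"]   # kubelet.service: Failed
    restarts = ["16:31:50.727514", "16:32:01.730367"]   # Scheduled restart job

    for f, r in zip(failures, restarts):
        print(f"{(t(r) - t(f)).total_seconds():.1f} s")  # 10.0 s and 10.2 s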
Jun 25 16:32:01.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:01.916302 kernel: audit: type=1130 audit(1719333121.908:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:02.027430 kubelet[1761]: E0625 16:32:02.026472 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:32:02.036190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:32:02.036399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:32:02.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:32:02.041874 kernel: audit: type=1131 audit(1719333122.035:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:32:03.454777 containerd[1288]: time="2024-06-25T16:32:03.454674931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:03.457152 containerd[1288]: time="2024-06-25T16:32:03.457033029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:32:03.462205 containerd[1288]: time="2024-06-25T16:32:03.462062652Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:03.467428 containerd[1288]: time="2024-06-25T16:32:03.467297910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:03.473329 containerd[1288]: time="2024-06-25T16:32:03.473244101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:03.474799 containerd[1288]: time="2024-06-25T16:32:03.474728194Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 8.172242808s" Jun 25 16:32:03.474799 containerd[1288]: time="2024-06-25T16:32:03.474788781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:32:03.565962 containerd[1288]: time="2024-06-25T16:32:03.565636107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:32:04.405157 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2655757759.mount: Deactivated successfully. Jun 25 16:32:05.832470 containerd[1288]: time="2024-06-25T16:32:05.832381174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:05.837110 containerd[1288]: time="2024-06-25T16:32:05.837025312Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:32:05.847351 containerd[1288]: time="2024-06-25T16:32:05.847205733Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:05.868284 containerd[1288]: time="2024-06-25T16:32:05.868196878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:05.946922 containerd[1288]: time="2024-06-25T16:32:05.946850925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:05.948483 containerd[1288]: time="2024-06-25T16:32:05.948398881Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.382697247s" Jun 25 16:32:05.948483 containerd[1288]: time="2024-06-25T16:32:05.948465960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:32:08.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:08.891455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:08.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:08.898797 kernel: audit: type=1130 audit(1719333128.890:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:08.898863 kernel: audit: type=1131 audit(1719333128.890:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:08.907061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:08.937294 systemd[1]: Reloading. Jun 25 16:32:09.246445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
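The run.go:74 failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is what keeps kubelet.service in the restart loop seen here: on a kubeadm-managed node that file is only written once kubeadm init or kubeadm join runs, so the unit fails and systemd keeps retrying until the file appears. Purely to illustrate the shape of the missing file, the sketch below writes a minimal KubeletConfiguration whose fields mirror settings visible later in this log (systemd cgroup driver, static pods under /etc/kubernetes/manifests); it is not the kubeadm-generated file itself.

package main

import (
	"log"
	"os"
)

// Illustrative minimal KubeletConfiguration; a real kubeadm-written config
// carries many more fields (authentication, TLS, clusterDNS, ...).
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}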
Jun 25 16:32:09.346650 kernel: audit: type=1334 audit(1719333129.335:242): prog-id=38 op=LOAD Jun 25 16:32:09.346948 kernel: audit: type=1334 audit(1719333129.336:243): prog-id=35 op=UNLOAD Jun 25 16:32:09.346984 kernel: audit: type=1334 audit(1719333129.337:244): prog-id=39 op=LOAD Jun 25 16:32:09.347009 kernel: audit: type=1334 audit(1719333129.337:245): prog-id=40 op=LOAD Jun 25 16:32:09.347032 kernel: audit: type=1334 audit(1719333129.337:246): prog-id=36 op=UNLOAD Jun 25 16:32:09.347053 kernel: audit: type=1334 audit(1719333129.337:247): prog-id=37 op=UNLOAD Jun 25 16:32:09.347075 kernel: audit: type=1334 audit(1719333129.337:248): prog-id=41 op=LOAD Jun 25 16:32:09.347094 kernel: audit: type=1334 audit(1719333129.337:249): prog-id=42 op=LOAD Jun 25 16:32:09.335000 audit: BPF prog-id=38 op=LOAD Jun 25 16:32:09.336000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:32:09.337000 audit: BPF prog-id=39 op=LOAD Jun 25 16:32:09.337000 audit: BPF prog-id=40 op=LOAD Jun 25 16:32:09.337000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:32:09.337000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:32:09.337000 audit: BPF prog-id=41 op=LOAD Jun 25 16:32:09.337000 audit: BPF prog-id=42 op=LOAD Jun 25 16:32:09.337000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:32:09.337000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:32:09.339000 audit: BPF prog-id=43 op=LOAD Jun 25 16:32:09.339000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:32:09.342000 audit: BPF prog-id=44 op=LOAD Jun 25 16:32:09.342000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:32:09.342000 audit: BPF prog-id=45 op=LOAD Jun 25 16:32:09.342000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:32:09.342000 audit: BPF prog-id=46 op=LOAD Jun 25 16:32:09.342000 audit: BPF prog-id=47 op=LOAD Jun 25 16:32:09.343000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:32:09.343000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:32:09.346000 audit: BPF prog-id=48 op=LOAD Jun 25 16:32:09.346000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:32:09.347000 audit: BPF prog-id=49 op=LOAD Jun 25 16:32:09.347000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:32:09.347000 audit: BPF prog-id=50 op=LOAD Jun 25 16:32:09.347000 audit: BPF prog-id=51 op=LOAD Jun 25 16:32:09.347000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:32:09.347000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:32:09.369405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:09.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:09.371432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:09.372092 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:32:09.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:09.372314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:09.374732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:09.485024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:09.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:09.658204 kubelet[1928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:32:09.658204 kubelet[1928]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:32:09.658204 kubelet[1928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:32:09.658646 kubelet[1928]: I0625 16:32:09.658198 1928 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:32:10.152467 kubelet[1928]: I0625 16:32:10.152394 1928 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:32:10.152467 kubelet[1928]: I0625 16:32:10.152438 1928 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:32:10.152760 kubelet[1928]: I0625 16:32:10.152719 1928 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:32:10.200976 kubelet[1928]: E0625 16:32:10.199118 1928 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.200976 kubelet[1928]: I0625 16:32:10.200762 1928 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:32:10.260701 kubelet[1928]: I0625 16:32:10.260201 1928 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:32:10.264167 kubelet[1928]: I0625 16:32:10.263472 1928 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:32:10.264167 kubelet[1928]: I0625 16:32:10.263704 1928 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:32:10.264167 kubelet[1928]: I0625 16:32:10.263725 1928 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:32:10.264167 kubelet[1928]: I0625 16:32:10.263734 1928 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:32:10.266237 kubelet[1928]: I0625 16:32:10.265960 1928 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:32:10.275270 kubelet[1928]: I0625 16:32:10.275169 1928 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:32:10.275270 kubelet[1928]: I0625 16:32:10.275213 1928 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:32:10.275270 kubelet[1928]: I0625 16:32:10.275240 1928 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:32:10.275270 kubelet[1928]: I0625 16:32:10.275257 1928 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:32:10.287050 kubelet[1928]: I0625 16:32:10.287001 1928 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:32:10.287416 kubelet[1928]: W0625 16:32:10.287329 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.287699 kubelet[1928]: E0625 16:32:10.287654 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.289264 kubelet[1928]: W0625 16:32:10.289159 1928 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.289337 kubelet[1928]: E0625 16:32:10.289275 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.295200 kubelet[1928]: W0625 16:32:10.295145 1928 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:32:10.296480 kubelet[1928]: I0625 16:32:10.296458 1928 server.go:1232] "Started kubelet" Jun 25 16:32:10.313303 kubelet[1928]: I0625 16:32:10.313258 1928 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:32:10.313672 kubelet[1928]: E0625 16:32:10.313515 1928 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c5f61e0afe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.149:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.149:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:32:10.313807 kubelet[1928]: I0625 16:32:10.313303 1928 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:32:10.314185 kubelet[1928]: I0625 16:32:10.314154 1928 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:32:10.316409 kubelet[1928]: I0625 16:32:10.315039 1928 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:32:10.316409 kubelet[1928]: E0625 16:32:10.315145 1928 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:32:10.316409 kubelet[1928]: E0625 16:32:10.315176 1928 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:32:10.316409 kubelet[1928]: I0625 16:32:10.315203 1928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:32:10.316409 kubelet[1928]: I0625 16:32:10.315489 1928 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:32:10.316409 kubelet[1928]: E0625 16:32:10.316202 1928 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:32:10.316710 kubelet[1928]: I0625 16:32:10.316428 1928 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:32:10.316710 kubelet[1928]: I0625 16:32:10.316506 1928 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:32:10.317267 kubelet[1928]: W0625 16:32:10.316772 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.317267 kubelet[1928]: E0625 16:32:10.316827 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.319000 audit[1941]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.319000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff58c2b4c0 a2=0 a3=7faf1752ae90 items=0 ppid=1928 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:32:10.321000 audit[1943]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.321000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff37134830 a2=0 a3=7f743ecede90 items=0 ppid=1928 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.321000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:32:10.323000 audit[1946]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.323000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff80a3ea20 a2=0 a3=7fcc0df00e90 items=0 ppid=1928 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:32:10.326000 audit[1948]: NETFILTER_CFG table=filter:29 family=2 
entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.326000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc48e52e0 a2=0 a3=7f18856f9e90 items=0 ppid=1928 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:32:10.333797 kubelet[1928]: E0625 16:32:10.333767 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Jun 25 16:32:10.340000 audit[1952]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.340000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe952b9810 a2=0 a3=7f8ba60f9e90 items=0 ppid=1928 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:32:10.343076 kubelet[1928]: I0625 16:32:10.343053 1928 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:32:10.342000 audit[1954]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:10.342000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe6c3bcaf0 a2=0 a3=7f672b0cee90 items=0 ppid=1928 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:32:10.344436 kubelet[1928]: I0625 16:32:10.344426 1928 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:32:10.344511 kubelet[1928]: I0625 16:32:10.344502 1928 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:32:10.344592 kubelet[1928]: I0625 16:32:10.344573 1928 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:32:10.344717 kubelet[1928]: E0625 16:32:10.344699 1928 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:32:10.346208 kubelet[1928]: W0625 16:32:10.345214 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.346208 kubelet[1928]: E0625 16:32:10.345277 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:10.345000 audit[1956]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:10.345000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6a81a1b0 a2=0 a3=7f403aa61e90 items=0 ppid=1928 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:32:10.346000 audit[1955]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.346000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8db85f90 a2=0 a3=7f1efc795e90 items=0 ppid=1928 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:32:10.347000 audit[1958]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:10.347000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff8bbd24f0 a2=0 a3=7fb613f92e90 items=0 ppid=1928 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:32:10.348000 audit[1959]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.348000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd96d370 a2=0 a3=7f2195696e90 items=0 ppid=1928 pid=1959 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:32:10.349268 kubelet[1928]: I0625 16:32:10.348452 1928 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:32:10.349268 kubelet[1928]: I0625 16:32:10.348467 1928 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:32:10.349268 kubelet[1928]: I0625 16:32:10.348483 1928 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:32:10.349000 audit[1960]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:10.349000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea7424a90 a2=0 a3=7fdd44f23e90 items=0 ppid=1928 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:32:10.349000 audit[1961]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:10.349000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff379e9820 a2=0 a3=7f0dec093e90 items=0 ppid=1928 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:10.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:32:10.418512 kubelet[1928]: I0625 16:32:10.418164 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:10.420533 kubelet[1928]: E0625 16:32:10.420515 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:10.445657 kubelet[1928]: E0625 16:32:10.445484 1928 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:32:10.535047 kubelet[1928]: E0625 16:32:10.534998 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Jun 25 16:32:10.560639 kubelet[1928]: E0625 16:32:10.560037 1928 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c5f61e0afe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.149:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.149:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:32:10.624634 kubelet[1928]: I0625 16:32:10.624510 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:10.625980 kubelet[1928]: E0625 16:32:10.625943 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:10.646520 kubelet[1928]: E0625 16:32:10.646316 1928 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:32:10.935639 kubelet[1928]: E0625 16:32:10.935556 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Jun 25 16:32:11.031688 kubelet[1928]: I0625 16:32:11.031287 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:11.034020 kubelet[1928]: E0625 16:32:11.033962 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:11.047375 kubelet[1928]: E0625 16:32:11.047224 1928 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:32:11.274252 kubelet[1928]: W0625 16:32:11.273846 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.274783 kubelet[1928]: E0625 16:32:11.274582 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.509671 kubelet[1928]: W0625 16:32:11.509488 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.509671 kubelet[1928]: E0625 16:32:11.509559 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.671175 kubelet[1928]: W0625 16:32:11.670266 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.671175 kubelet[1928]: E0625 16:32:11.670800 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.721724 kubelet[1928]: W0625 16:32:11.720940 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.721724 kubelet[1928]: E0625 16:32:11.721008 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:11.736463 kubelet[1928]: E0625 16:32:11.736341 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Jun 25 16:32:11.768028 kubelet[1928]: I0625 16:32:11.767858 1928 policy_none.go:49] "None policy: Start" Jun 25 16:32:11.773862 kubelet[1928]: I0625 16:32:11.773803 1928 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:32:11.773862 kubelet[1928]: I0625 16:32:11.773870 1928 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:32:11.839336 kubelet[1928]: I0625 16:32:11.837074 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:11.844005 kubelet[1928]: E0625 16:32:11.842922 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:11.849343 kubelet[1928]: E0625 16:32:11.849292 1928 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:32:11.918573 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:32:11.960516 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:32:11.983225 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 16:32:12.002411 kubelet[1928]: I0625 16:32:12.001995 1928 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:32:12.003735 kubelet[1928]: I0625 16:32:12.003497 1928 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:32:12.004115 kubelet[1928]: E0625 16:32:12.004103 1928 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:32:12.369898 kubelet[1928]: E0625 16:32:12.366907 1928 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:13.337952 kubelet[1928]: E0625 16:32:13.337879 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="3.2s" Jun 25 16:32:13.445043 kubelet[1928]: I0625 16:32:13.444999 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:13.445503 kubelet[1928]: E0625 16:32:13.445443 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:13.449979 kubelet[1928]: I0625 16:32:13.449906 1928 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:32:13.451332 kubelet[1928]: I0625 16:32:13.451288 1928 topology_manager.go:215] "Topology Admit Handler" podUID="3631274fb2cd218c48d2734d776415f9" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:32:13.453885 kubelet[1928]: I0625 16:32:13.452686 1928 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:32:13.462738 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jun 25 16:32:13.487794 systemd[1]: Created slice kubepods-burstable-pod3631274fb2cd218c48d2734d776415f9.slice - libcontainer container kubepods-burstable-pod3631274fb2cd218c48d2734d776415f9.slice. Jun 25 16:32:13.515582 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. 
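The repeated "dial tcp 10.0.0.149:6443: connect: connection refused" errors throughout this stretch (CSR creation, lease renewal, node registration, informer list/watch) are the normal bootstrap chicken-and-egg: the kubelet is running before the kube-apiserver static pod it is about to create, so every call to the control plane fails until that pod comes up. A trivial probe against the same endpoint (address taken from the log) reproduces the condition the kubelet keeps retrying:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same API server endpoint the kubelet is retrying in the log above.
	addr := "10.0.0.149:6443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Until the kube-apiserver static pod starts, this prints
		// "connect: connection refused", matching the kubelet errors.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable at", addr)
}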
Jun 25 16:32:13.546875 kubelet[1928]: I0625 16:32:13.546321 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:13.546875 kubelet[1928]: I0625 16:32:13.546391 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:32:13.546875 kubelet[1928]: I0625 16:32:13.546426 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:13.546875 kubelet[1928]: I0625 16:32:13.546447 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:13.546875 kubelet[1928]: I0625 16:32:13.546470 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:13.547177 kubelet[1928]: I0625 16:32:13.546495 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:13.547177 kubelet[1928]: I0625 16:32:13.546514 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:13.547177 kubelet[1928]: I0625 16:32:13.546535 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:13.547177 kubelet[1928]: I0625 16:32:13.546556 1928 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:13.612481 kubelet[1928]: W0625 16:32:13.612259 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:13.612663 kubelet[1928]: E0625 16:32:13.612366 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:13.787097 kubelet[1928]: E0625 16:32:13.786506 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:13.787534 containerd[1288]: time="2024-06-25T16:32:13.787465541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 16:32:13.809622 kubelet[1928]: E0625 16:32:13.809553 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:13.810260 containerd[1288]: time="2024-06-25T16:32:13.810207782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3631274fb2cd218c48d2734d776415f9,Namespace:kube-system,Attempt:0,}" Jun 25 16:32:13.822957 kubelet[1928]: E0625 16:32:13.822826 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:13.823694 containerd[1288]: time="2024-06-25T16:32:13.823512364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 16:32:14.172798 update_engine[1280]: I0625 16:32:14.172662 1280 update_attempter.cc:509] Updating boot flags... 
Jun 25 16:32:14.285031 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1968) Jun 25 16:32:14.376809 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1972) Jun 25 16:32:14.426795 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1972) Jun 25 16:32:14.576197 kubelet[1928]: W0625 16:32:14.576117 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:14.576197 kubelet[1928]: E0625 16:32:14.576197 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:14.690877 kubelet[1928]: W0625 16:32:14.690787 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:14.690877 kubelet[1928]: E0625 16:32:14.690874 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:14.700220 kubelet[1928]: W0625 16:32:14.700152 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:14.700220 kubelet[1928]: E0625 16:32:14.700225 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:16.538969 kubelet[1928]: E0625 16:32:16.538896 1928 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="6.4s" Jun 25 16:32:16.620818 kubelet[1928]: E0625 16:32:16.620723 1928 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:16.650671 kubelet[1928]: I0625 16:32:16.650428 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:16.678071 kubelet[1928]: E0625 16:32:16.651022 1928 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 25 16:32:17.435564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167345957.mount: Deactivated successfully. 
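Worth noting across this whole bootstrap phase: the "Failed to ensure lease exists, will retry" interval doubles on every failure (200ms, 400ms, 800ms, 1.6s, 3.2s and now 6.4s), i.e. a plain exponential backoff while the API server is unreachable. A minimal sketch of that pattern; the starting interval is taken from the log, while the cap is an illustrative assumption rather than a value this log confirms.

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // illustrative cap, not confirmed by this log
	for i := 0; i < 6; i++ {
		fmt.Println("retry in", interval) // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}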
Jun 25 16:32:17.461416 containerd[1288]: time="2024-06-25T16:32:17.460775136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.463060 containerd[1288]: time="2024-06-25T16:32:17.463011317Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.464841 containerd[1288]: time="2024-06-25T16:32:17.464768650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:32:17.466530 containerd[1288]: time="2024-06-25T16:32:17.466447223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:32:17.469331 containerd[1288]: time="2024-06-25T16:32:17.469277100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.470997 containerd[1288]: time="2024-06-25T16:32:17.470866745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:32:17.474068 containerd[1288]: time="2024-06-25T16:32:17.473891152Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.477806 containerd[1288]: time="2024-06-25T16:32:17.477383726Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.483345 containerd[1288]: time="2024-06-25T16:32:17.482810940Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.490358 containerd[1288]: time="2024-06-25T16:32:17.487729148Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.490358 containerd[1288]: time="2024-06-25T16:32:17.489233261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.491510 containerd[1288]: time="2024-06-25T16:32:17.491267510Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.494230 containerd[1288]: time="2024-06-25T16:32:17.494173060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.495914 containerd[1288]: time="2024-06-25T16:32:17.495403023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.671719043s" Jun 25 16:32:17.496535 containerd[1288]: time="2024-06-25T16:32:17.496498701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.498244 containerd[1288]: time="2024-06-25T16:32:17.498188586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.687866166s" Jun 25 16:32:17.499554 containerd[1288]: time="2024-06-25T16:32:17.499517888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.711918822s" Jun 25 16:32:17.500364 containerd[1288]: time="2024-06-25T16:32:17.500321632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:32:17.671514 containerd[1288]: time="2024-06-25T16:32:17.671292017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:17.671514 containerd[1288]: time="2024-06-25T16:32:17.671347131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.671733 containerd[1288]: time="2024-06-25T16:32:17.671496064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:17.671733 containerd[1288]: time="2024-06-25T16:32:17.671514188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.672645 containerd[1288]: time="2024-06-25T16:32:17.672485341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:17.672645 containerd[1288]: time="2024-06-25T16:32:17.672543641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.672645 containerd[1288]: time="2024-06-25T16:32:17.672563989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:17.672645 containerd[1288]: time="2024-06-25T16:32:17.672588186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.674504 containerd[1288]: time="2024-06-25T16:32:17.674388390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:17.674504 containerd[1288]: time="2024-06-25T16:32:17.674443004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.674504 containerd[1288]: time="2024-06-25T16:32:17.674458903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:17.674504 containerd[1288]: time="2024-06-25T16:32:17.674469684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:17.703193 systemd[1]: Started cri-containerd-e0d4a6decac260b0d2058a0b501f17478fe5aba053a2f830cb51ee8131f564d1.scope - libcontainer container e0d4a6decac260b0d2058a0b501f17478fe5aba053a2f830cb51ee8131f564d1. Jun 25 16:32:17.708564 systemd[1]: Started cri-containerd-4ba56bc406df5be97bc72c17e08af22045b0b632b5f4a887eda1f8327549f59b.scope - libcontainer container 4ba56bc406df5be97bc72c17e08af22045b0b632b5f4a887eda1f8327549f59b. Jun 25 16:32:17.709783 systemd[1]: Started cri-containerd-6a5ee653f3c3aa1948a3e2008caf2c3e42bd9060dcec05e40d40634f282c643a.scope - libcontainer container 6a5ee653f3c3aa1948a3e2008caf2c3e42bd9060dcec05e40d40634f282c643a. Jun 25 16:32:17.730797 kernel: kauditd_printk_skb: 59 callbacks suppressed Jun 25 16:32:17.730893 kernel: audit: type=1334 audit(1719333137.726:285): prog-id=52 op=LOAD Jun 25 16:32:17.726000 audit: BPF prog-id=52 op=LOAD Jun 25 16:32:17.735000 audit: BPF prog-id=53 op=LOAD Jun 25 16:32:17.744410 kernel: audit: type=1334 audit(1719333137.735:286): prog-id=53 op=LOAD Jun 25 16:32:17.744447 kernel: audit: type=1300 audit(1719333137.735:286): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2006 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.735000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2006 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.745330 kernel: audit: type=1327 audit(1719333137.735:286): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643461366465636163323630623064323035386130623530316631 Jun 25 16:32:17.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643461366465636163323630623064323035386130623530316631 Jun 25 16:32:17.735000 audit: BPF prog-id=54 op=LOAD Jun 25 16:32:17.754657 kernel: audit: type=1334 audit(1719333137.735:287): prog-id=54 op=LOAD Jun 25 16:32:17.754767 kernel: audit: type=1300 audit(1719333137.735:287): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 
ppid=2006 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.735000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2006 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643461366465636163323630623064323035386130623530316631 Jun 25 16:32:17.735000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:32:17.779307 kernel: audit: type=1327 audit(1719333137.735:287): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643461366465636163323630623064323035386130623530316631 Jun 25 16:32:17.779442 kernel: audit: type=1334 audit(1719333137.735:288): prog-id=54 op=UNLOAD Jun 25 16:32:17.789622 kernel: audit: type=1334 audit(1719333137.735:289): prog-id=53 op=UNLOAD Jun 25 16:32:17.789822 kernel: audit: type=1334 audit(1719333137.735:290): prog-id=55 op=LOAD Jun 25 16:32:17.735000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:32:17.735000 audit: BPF prog-id=55 op=LOAD Jun 25 16:32:17.735000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2006 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643461366465636163323630623064323035386130623530316631 Jun 25 16:32:17.750000 audit: BPF prog-id=56 op=LOAD Jun 25 16:32:17.754000 audit: BPF prog-id=57 op=LOAD Jun 25 16:32:17.754000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2007 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613536626334303664663562653937626337326331376530386166 Jun 25 16:32:17.754000 audit: BPF prog-id=58 op=LOAD Jun 25 16:32:17.754000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2007 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613536626334303664663562653937626337326331376530386166 Jun 25 16:32:17.754000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:32:17.754000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:32:17.754000 audit: BPF prog-id=59 op=LOAD Jun 25 16:32:17.754000 audit: BPF prog-id=60 op=LOAD Jun 25 16:32:17.754000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2007 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613536626334303664663562653937626337326331376530386166 Jun 25 16:32:17.755000 audit: BPF prog-id=61 op=LOAD Jun 25 16:32:17.755000 audit[2037]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2005 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661356565363533663363336161313934386133653230303863616632 Jun 25 16:32:17.755000 audit: BPF prog-id=62 op=LOAD Jun 25 16:32:17.755000 audit[2037]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2005 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661356565363533663363336161313934386133653230303863616632 Jun 25 16:32:17.755000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:32:17.755000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:32:17.755000 audit: BPF prog-id=63 op=LOAD Jun 25 16:32:17.755000 audit[2037]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2005 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:17.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661356565363533663363336161313934386133653230303863616632 Jun 25 16:32:17.807238 containerd[1288]: time="2024-06-25T16:32:17.806028568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e0d4a6decac260b0d2058a0b501f17478fe5aba053a2f830cb51ee8131f564d1\"" Jun 25 16:32:17.810593 kubelet[1928]: E0625 16:32:17.810311 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:17.819562 containerd[1288]: time="2024-06-25T16:32:17.817431436Z" level=info msg="CreateContainer within sandbox \"e0d4a6decac260b0d2058a0b501f17478fe5aba053a2f830cb51ee8131f564d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:32:17.821689 containerd[1288]: time="2024-06-25T16:32:17.820250382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3631274fb2cd218c48d2734d776415f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ba56bc406df5be97bc72c17e08af22045b0b632b5f4a887eda1f8327549f59b\"" Jun 25 16:32:17.821785 kubelet[1928]: E0625 16:32:17.820758 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:17.823217 containerd[1288]: time="2024-06-25T16:32:17.823014776Z" level=info msg="CreateContainer within sandbox \"4ba56bc406df5be97bc72c17e08af22045b0b632b5f4a887eda1f8327549f59b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:32:17.830286 containerd[1288]: time="2024-06-25T16:32:17.828991702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a5ee653f3c3aa1948a3e2008caf2c3e42bd9060dcec05e40d40634f282c643a\"" Jun 25 16:32:17.830358 kubelet[1928]: E0625 16:32:17.829826 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:17.832694 containerd[1288]: time="2024-06-25T16:32:17.832630934Z" level=info msg="CreateContainer within sandbox \"6a5ee653f3c3aa1948a3e2008caf2c3e42bd9060dcec05e40d40634f282c643a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:32:17.941290 kubelet[1928]: W0625 16:32:17.940996 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:17.941290 kubelet[1928]: E0625 16:32:17.941058 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:18.437728 containerd[1288]: time="2024-06-25T16:32:18.437647078Z" level=info msg="CreateContainer within sandbox \"e0d4a6decac260b0d2058a0b501f17478fe5aba053a2f830cb51ee8131f564d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d\"" Jun 25 16:32:18.440286 containerd[1288]: time="2024-06-25T16:32:18.438640612Z" level=info msg="StartContainer for \"3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d\"" Jun 25 16:32:18.492644 systemd[1]: 
run-containerd-runc-k8s.io-3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d-runc.R1P6p4.mount: Deactivated successfully. Jun 25 16:32:18.503201 systemd[1]: Started cri-containerd-3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d.scope - libcontainer container 3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d. Jun 25 16:32:18.518000 audit: BPF prog-id=64 op=LOAD Jun 25 16:32:18.519000 audit: BPF prog-id=65 op=LOAD Jun 25 16:32:18.519000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2006 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636130313961643933663464666164396561333730316232306537 Jun 25 16:32:18.520000 audit: BPF prog-id=66 op=LOAD Jun 25 16:32:18.520000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2006 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636130313961643933663464666164396561333730316232306537 Jun 25 16:32:18.520000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:32:18.520000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:32:18.520000 audit: BPF prog-id=67 op=LOAD Jun 25 16:32:18.520000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2006 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636130313961643933663464666164396561333730316232306537 Jun 25 16:32:18.614611 containerd[1288]: time="2024-06-25T16:32:18.614504125Z" level=info msg="CreateContainer within sandbox \"6a5ee653f3c3aa1948a3e2008caf2c3e42bd9060dcec05e40d40634f282c643a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59865ff31fdf3b775adb21466ecde8ec70dff67ea448c444215f37e614d82e42\"" Jun 25 16:32:18.614999 containerd[1288]: time="2024-06-25T16:32:18.614522510Z" level=info msg="StartContainer for \"3cca019ad93f4dfad9ea3701b20e721c7171244dcf98e75bf2e0a1eeb00a2b0d\" returns successfully" Jun 25 16:32:18.617788 containerd[1288]: time="2024-06-25T16:32:18.615121936Z" level=info msg="CreateContainer within sandbox \"4ba56bc406df5be97bc72c17e08af22045b0b632b5f4a887eda1f8327549f59b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41802fc81cd862b3c326520e02439a51ac4437e604f2d23d1e01ddf6e13e5d75\"" Jun 25 16:32:18.617788 containerd[1288]: time="2024-06-25T16:32:18.615421334Z" level=info 
msg="StartContainer for \"59865ff31fdf3b775adb21466ecde8ec70dff67ea448c444215f37e614d82e42\"" Jun 25 16:32:18.617788 containerd[1288]: time="2024-06-25T16:32:18.615915791Z" level=info msg="StartContainer for \"41802fc81cd862b3c326520e02439a51ac4437e604f2d23d1e01ddf6e13e5d75\"" Jun 25 16:32:18.680411 systemd[1]: Started cri-containerd-59865ff31fdf3b775adb21466ecde8ec70dff67ea448c444215f37e614d82e42.scope - libcontainer container 59865ff31fdf3b775adb21466ecde8ec70dff67ea448c444215f37e614d82e42. Jun 25 16:32:18.693466 systemd[1]: Started cri-containerd-41802fc81cd862b3c326520e02439a51ac4437e604f2d23d1e01ddf6e13e5d75.scope - libcontainer container 41802fc81cd862b3c326520e02439a51ac4437e604f2d23d1e01ddf6e13e5d75. Jun 25 16:32:18.695362 kubelet[1928]: W0625 16:32:18.694565 1928 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:18.695362 kubelet[1928]: E0625 16:32:18.694634 1928 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 25 16:32:18.703000 audit: BPF prog-id=68 op=LOAD Jun 25 16:32:18.705000 audit: BPF prog-id=69 op=LOAD Jun 25 16:32:18.705000 audit[2159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2005 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539383635666633316664663362373735616462323134363665636465 Jun 25 16:32:18.705000 audit: BPF prog-id=70 op=LOAD Jun 25 16:32:18.705000 audit[2159]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2005 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539383635666633316664663362373735616462323134363665636465 Jun 25 16:32:18.705000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:32:18.705000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:32:18.705000 audit: BPF prog-id=71 op=LOAD Jun 25 16:32:18.705000 audit[2159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2005 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539383635666633316664663362373735616462323134363665636465 Jun 25 
16:32:18.716000 audit: BPF prog-id=72 op=LOAD Jun 25 16:32:18.716000 audit: BPF prog-id=73 op=LOAD Jun 25 16:32:18.716000 audit[2160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2007 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431383032666338316364383632623363333236353230653032343339 Jun 25 16:32:18.716000 audit: BPF prog-id=74 op=LOAD Jun 25 16:32:18.716000 audit[2160]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2007 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431383032666338316364383632623363333236353230653032343339 Jun 25 16:32:18.716000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:32:18.716000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:32:18.716000 audit: BPF prog-id=75 op=LOAD Jun 25 16:32:18.716000 audit[2160]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2007 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:18.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431383032666338316364383632623363333236353230653032343339 Jun 25 16:32:18.900078 containerd[1288]: time="2024-06-25T16:32:18.899984625Z" level=info msg="StartContainer for \"59865ff31fdf3b775adb21466ecde8ec70dff67ea448c444215f37e614d82e42\" returns successfully" Jun 25 16:32:18.900349 containerd[1288]: time="2024-06-25T16:32:18.900164967Z" level=info msg="StartContainer for \"41802fc81cd862b3c326520e02439a51ac4437e604f2d23d1e01ddf6e13e5d75\" returns successfully" Jun 25 16:32:19.392715 kubelet[1928]: E0625 16:32:19.392689 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:19.395057 kubelet[1928]: E0625 16:32:19.395045 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:19.396830 kubelet[1928]: E0625 16:32:19.396814 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:20.237000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c62,c284 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.237000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000c1a000 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:20.237000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:20.240000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.240000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000f02020 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:20.240000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:20.405049 kubelet[1928]: E0625 16:32:20.405011 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:20.405492 kubelet[1928]: E0625 16:32:20.405472 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:20.406569 kubelet[1928]: E0625 16:32:20.406254 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:20.995000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.995000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0045ec000 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:20.995000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:20.995000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.995000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c005c12040 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:20.995000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:20.996000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.996000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0070432f0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:20.996000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:20.997000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6279 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:20.997000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4a a1=c0045ecea0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:20.997000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:21.018000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:21.018000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5e a1=c004d26c40 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:21.018000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:21.018000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:21.018000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5e a1=c005e6fda0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:32:21.018000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:32:21.133336 kubelet[1928]: E0625 16:32:21.132992 1928 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c5f61e0afe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 296405988, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'namespaces "default" not found' (will not retry!) 
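[Editor's note] The hex PROCTITLE fields in the audit records above are just the audited process's argv, NUL-separated and hex-encoded by the kernel because of those NUL bytes. A minimal sketch (not auditd or containerd code; the helper name is mine) that decodes one of the runc records above:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Turn an audit PROCTITLE hex string back into an argv list."""
    raw = bytes.fromhex(hex_value)
    # proctitle is argv joined by NUL bytes; audit hex-encodes it because of those NULs.
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]


# Value copied from one of the runc audit records above (audit truncates long
# proctitles, so the final path argument ends mid container ID).
sample = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
    "002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
    "6530643461366465636163323630623064323035386130623530316631"
)
print(decode_proctitle(sample))
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0d4a6decac260b0d2058a0b501f1']
```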
Jun 25 16:32:21.193237 kubelet[1928]: E0625 16:32:21.193109 1928 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c5f62fedeee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 315161326, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 315161326, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'namespaces "default" not found' (will not retry!) Jun 25 16:32:21.249438 kubelet[1928]: E0625 16:32:21.248973 1928 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c5f64f2b68d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 347918989, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 32, 10, 347918989, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'namespaces "default" not found' (will not retry!) 
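[Editor's note] The rejected events above are named "localhost.&lt;hex&gt;", and the hex suffix is the event's FirstTimestamp in Unix nanoseconds: 0x17dc4c5f61e0afe4 decodes to 2024-06-25 16:32:10.296405988 UTC, matching the FirstTimestamp in the first rejected event, which appears to be client-go's event-naming convention. A purely illustrative sketch to convert the suffix back:

```python
from datetime import datetime, timezone

def event_name_to_first_timestamp(event_name: str) -> datetime:
    """Decode the hex suffix of a node event name into a UTC timestamp."""
    unix_nanos = int(event_name.rsplit(".", 1)[1], 16)
    seconds, nanos = divmod(unix_nanos, 1_000_000_000)
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)

print(event_name_to_first_timestamp("localhost.17dc4c5f61e0afe4"))
# 2024-06-25 16:32:10.296405+00:00 -- matching FirstTimestamp 16:32:10.296405988
# in the rejected event above (nanoseconds truncated to microseconds).
```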
Jun 25 16:32:21.288952 kubelet[1928]: I0625 16:32:21.288262 1928 apiserver.go:52] "Watching apiserver" Jun 25 16:32:21.317299 kubelet[1928]: I0625 16:32:21.317223 1928 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:32:21.405711 kubelet[1928]: E0625 16:32:21.405657 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:21.500763 kubelet[1928]: E0625 16:32:21.498320 1928 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:32:21.938935 kubelet[1928]: E0625 16:32:21.938805 1928 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:32:22.005316 kubelet[1928]: E0625 16:32:22.004695 1928 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:32:22.516097 kubelet[1928]: E0625 16:32:22.515960 1928 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:32:22.965706 kubelet[1928]: E0625 16:32:22.965571 1928 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:32:23.063255 kubelet[1928]: I0625 16:32:23.062340 1928 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:23.074092 kubelet[1928]: I0625 16:32:23.074034 1928 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:32:24.174604 kubelet[1928]: E0625 16:32:24.174546 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:24.417162 kubelet[1928]: E0625 16:32:24.416630 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:25.519403 systemd[1]: Reloading. Jun 25 16:32:25.703255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
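[Editor's note] The dns.go:153 warnings repeated throughout this section mean the node's resolv.conf lists more than three nameservers; the kubelet, like the stock resolver, applies only the first three, which is why the "applied nameserver line" above shows exactly three addresses. A hedged sketch of that check (not kubelet code; it simply reads the standard /etc/resolv.conf):

```python
# The resolver (and hence the kubelet) only applies the first three entries.
MAX_NAMESERVERS = 3

def split_nameservers(path: str = "/etc/resolv.conf") -> tuple[list[str], list[str]]:
    """Return (applied, omitted) nameserver entries from a resolv.conf file."""
    nameservers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

applied, omitted = split_nameservers()
if omitted:
    print(f"applied nameserver line is: {' '.join(applied)} (omitted: {' '.join(omitted)})")
```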
Jun 25 16:32:25.797000 audit: BPF prog-id=76 op=LOAD Jun 25 16:32:25.802127 kernel: kauditd_printk_skb: 86 callbacks suppressed Jun 25 16:32:25.802226 kernel: audit: type=1334 audit(1719333145.797:329): prog-id=76 op=LOAD Jun 25 16:32:25.798000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:32:25.809596 kernel: audit: type=1334 audit(1719333145.798:330): prog-id=38 op=UNLOAD Jun 25 16:32:25.809729 kernel: audit: type=1334 audit(1719333145.800:331): prog-id=77 op=LOAD Jun 25 16:32:25.809766 kernel: audit: type=1334 audit(1719333145.800:332): prog-id=78 op=LOAD Jun 25 16:32:25.809788 kernel: audit: type=1334 audit(1719333145.800:333): prog-id=39 op=UNLOAD Jun 25 16:32:25.800000 audit: BPF prog-id=77 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=78 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:32:25.800000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:32:25.811961 kernel: audit: type=1334 audit(1719333145.800:334): prog-id=40 op=UNLOAD Jun 25 16:32:25.812108 kernel: audit: type=1334 audit(1719333145.800:335): prog-id=79 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=79 op=LOAD Jun 25 16:32:25.813153 kernel: audit: type=1334 audit(1719333145.800:336): prog-id=72 op=UNLOAD Jun 25 16:32:25.800000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:32:25.814991 kernel: audit: type=1334 audit(1719333145.800:337): prog-id=80 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=80 op=LOAD Jun 25 16:32:25.815335 kernel: audit: type=1334 audit(1719333145.800:338): prog-id=81 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=81 op=LOAD Jun 25 16:32:25.800000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:32:25.801000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:32:25.801000 audit: BPF prog-id=82 op=LOAD Jun 25 16:32:25.801000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:32:25.803000 audit: BPF prog-id=83 op=LOAD Jun 25 16:32:25.803000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:32:25.806000 audit: BPF prog-id=84 op=LOAD Jun 25 16:32:25.806000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:32:25.807000 audit: BPF prog-id=85 op=LOAD Jun 25 16:32:25.807000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:32:25.807000 audit: BPF prog-id=86 op=LOAD Jun 25 16:32:25.807000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:32:25.807000 audit: BPF prog-id=87 op=LOAD Jun 25 16:32:25.808000 audit: BPF prog-id=88 op=LOAD Jun 25 16:32:25.808000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:32:25.808000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:32:25.808000 audit: BPF prog-id=89 op=LOAD Jun 25 16:32:25.808000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:32:25.810000 audit: BPF prog-id=90 op=LOAD Jun 25 16:32:25.810000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:32:25.811000 audit: BPF prog-id=91 op=LOAD Jun 25 16:32:25.811000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:32:25.814000 audit: BPF prog-id=92 op=LOAD Jun 25 16:32:25.814000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:32:25.815000 audit: BPF prog-id=93 op=LOAD Jun 25 16:32:25.815000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:32:25.815000 audit: BPF prog-id=94 op=LOAD Jun 25 16:32:25.815000 audit: BPF prog-id=95 op=LOAD Jun 25 16:32:25.815000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:32:25.815000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:32:25.831289 kubelet[1928]: I0625 16:32:25.831253 1928 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:32:25.831327 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:25.847237 systemd[1]: kubelet.service: Deactivated successfully. 
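[Editor's note] The burst of "audit: BPF prog-id=NN op=LOAD/UNLOAD" records around the systemd reload above is systemd replacing the BPF programs attached to its units. A small, illustrative tally over journal text in the format shown here (feed it, for example, the output of `journalctl -b` on stdin) reports which program IDs were loaded and never unloaded:

```python
import re
import sys

BPF_EVENT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def outstanding_bpf_prog_ids(log_text: str) -> list[int]:
    """Return prog-ids seen with op=LOAD and no later op=UNLOAD."""
    loaded: set[int] = set()
    for prog_id, op in BPF_EVENT.findall(log_text):
        if op == "LOAD":
            loaded.add(int(prog_id))
        else:
            loaded.discard(int(prog_id))
    return sorted(loaded)

if __name__ == "__main__":
    print(outstanding_bpf_prog_ids(sys.stdin.read()))
```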
Jun 25 16:32:25.847505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:25.847611 systemd[1]: kubelet.service: Consumed 1.507s CPU time. Jun 25 16:32:25.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:25.857628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:25.995261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:25.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:26.084628 kubelet[2286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:32:26.084628 kubelet[2286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:32:26.084628 kubelet[2286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:32:26.084628 kubelet[2286]: I0625 16:32:26.083551 2286 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:32:26.088790 kubelet[2286]: I0625 16:32:26.088209 2286 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:32:26.088790 kubelet[2286]: I0625 16:32:26.088244 2286 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:32:26.088790 kubelet[2286]: I0625 16:32:26.088467 2286 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:32:26.091926 kubelet[2286]: I0625 16:32:26.090179 2286 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:32:26.091926 kubelet[2286]: I0625 16:32:26.091381 2286 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:32:26.104295 kubelet[2286]: I0625 16:32:26.104252 2286 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:32:26.104547 kubelet[2286]: I0625 16:32:26.104470 2286 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104679 2286 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104714 2286 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104725 2286 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104786 2286 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104885 2286 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104899 2286 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:32:26.108800 kubelet[2286]: I0625 16:32:26.104924 2286 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:32:26.109189 kubelet[2286]: I0625 16:32:26.104940 2286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:32:26.109189 kubelet[2286]: I0625 16:32:26.106258 2286 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:32:26.109189 kubelet[2286]: I0625 16:32:26.106793 2286 server.go:1232] "Started kubelet" Jun 25 16:32:26.109189 kubelet[2286]: I0625 16:32:26.108476 2286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:32:26.110723 kubelet[2286]: E0625 16:32:26.110681 2286 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:32:26.110803 kubelet[2286]: E0625 16:32:26.110740 2286 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:32:26.115363 kubelet[2286]: I0625 16:32:26.115243 2286 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:32:26.115924 kubelet[2286]: I0625 16:32:26.115899 2286 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:32:26.116444 kubelet[2286]: I0625 16:32:26.116353 2286 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:32:26.127101 kubelet[2286]: I0625 16:32:26.117071 2286 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:32:26.127830 kubelet[2286]: I0625 16:32:26.118223 2286 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:32:26.127830 kubelet[2286]: I0625 16:32:26.120304 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:32:26.128436 kubelet[2286]: I0625 16:32:26.120379 2286 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:32:26.130246 kubelet[2286]: I0625 16:32:26.130224 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:32:26.130350 kubelet[2286]: I0625 16:32:26.130339 2286 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:32:26.130428 kubelet[2286]: I0625 16:32:26.130417 2286 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:32:26.130571 kubelet[2286]: E0625 16:32:26.130559 2286 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:32:26.135733 kubelet[2286]: I0625 16:32:26.130454 2286 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:32:26.216434 kubelet[2286]: I0625 16:32:26.215564 2286 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:32:26.216434 kubelet[2286]: I0625 16:32:26.216203 2286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:32:26.217543 kubelet[2286]: I0625 16:32:26.216876 2286 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:32:26.217543 kubelet[2286]: I0625 16:32:26.217089 2286 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:32:26.217543 kubelet[2286]: I0625 16:32:26.217197 2286 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:32:26.217543 kubelet[2286]: I0625 16:32:26.217227 2286 policy_none.go:49] "None policy: Start" Jun 25 16:32:26.219476 kubelet[2286]: I0625 16:32:26.219453 2286 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:32:26.219551 kubelet[2286]: I0625 16:32:26.219498 2286 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:32:26.219671 kubelet[2286]: I0625 16:32:26.219651 2286 state_mem.go:75] "Updated machine memory state" Jun 25 16:32:26.227128 kubelet[2286]: I0625 16:32:26.226934 2286 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:32:26.228557 kubelet[2286]: I0625 16:32:26.227682 2286 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:32:26.228557 kubelet[2286]: I0625 16:32:26.228044 2286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:32:26.237112 kubelet[2286]: I0625 16:32:26.237042 2286 topology_manager.go:215] "Topology Admit Handler" podUID="3631274fb2cd218c48d2734d776415f9" podNamespace="kube-system" 
podName="kube-apiserver-localhost" Jun 25 16:32:26.237280 kubelet[2286]: I0625 16:32:26.237158 2286 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:32:26.237280 kubelet[2286]: I0625 16:32:26.237200 2286 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:32:26.381611 kubelet[2286]: E0625 16:32:26.379408 2286 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.397082 kubelet[2286]: I0625 16:32:26.397048 2286 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 16:32:26.394000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=6304 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:32:26.394000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009f2c00 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:26.394000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:26.398406 kubelet[2286]: I0625 16:32:26.397823 2286 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:32:26.429772 kubelet[2286]: I0625 16:32:26.429135 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:32:26.429772 kubelet[2286]: I0625 16:32:26.429253 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:26.429772 kubelet[2286]: I0625 16:32:26.429287 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.429772 kubelet[2286]: I0625 16:32:26.429330 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.429772 kubelet[2286]: I0625 16:32:26.429372 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.430077 kubelet[2286]: I0625 16:32:26.429419 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.430077 kubelet[2286]: I0625 16:32:26.429447 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:26.430077 kubelet[2286]: I0625 16:32:26.429503 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3631274fb2cd218c48d2734d776415f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3631274fb2cd218c48d2734d776415f9\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:26.430077 kubelet[2286]: I0625 16:32:26.429540 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:32:26.580241 kubelet[2286]: E0625 16:32:26.580172 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:26.587953 kubelet[2286]: E0625 16:32:26.585182 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:26.683892 kubelet[2286]: E0625 16:32:26.682085 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:27.109439 kubelet[2286]: I0625 16:32:27.108308 2286 apiserver.go:52] "Watching apiserver" Jun 25 16:32:27.137536 kubelet[2286]: I0625 16:32:27.137418 2286 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:32:27.153249 kubelet[2286]: E0625 16:32:27.152770 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:27.153249 kubelet[2286]: E0625 16:32:27.153179 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:27.182242 kubelet[2286]: E0625 16:32:27.181128 2286 
kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:32:27.182242 kubelet[2286]: E0625 16:32:27.181627 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:27.215895 kubelet[2286]: I0625 16:32:27.215810 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.215738872 podCreationTimestamp="2024-06-25 16:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:32:27.182063965 +0000 UTC m=+1.182343457" watchObservedRunningTime="2024-06-25 16:32:27.215738872 +0000 UTC m=+1.216018364" Jun 25 16:32:27.246730 kubelet[2286]: I0625 16:32:27.246450 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.246400797 podCreationTimestamp="2024-06-25 16:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:32:27.218576655 +0000 UTC m=+1.218856177" watchObservedRunningTime="2024-06-25 16:32:27.246400797 +0000 UTC m=+1.246680289" Jun 25 16:32:27.273064 kubelet[2286]: I0625 16:32:27.273016 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.27296899 podCreationTimestamp="2024-06-25 16:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:32:27.247594729 +0000 UTC m=+1.247874211" watchObservedRunningTime="2024-06-25 16:32:27.27296899 +0000 UTC m=+1.273248482" Jun 25 16:32:28.155013 kubelet[2286]: E0625 16:32:28.154983 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:28.581000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:28.582000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:28.582000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0013849a0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:28.582000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:28.582000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" 
dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:28.582000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0013849e0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:28.582000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:28.583000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:28.583000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001384d20 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:28.583000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:28.581000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0011e77a0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:32:28.581000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:29.158010 kubelet[2286]: E0625 16:32:29.157978 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:30.162231 kubelet[2286]: E0625 16:32:30.162164 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:32.069167 sudo[1429]: pam_unix(sudo:session): session closed for user root Jun 25 16:32:32.068000 audit[1429]: USER_END pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:32.070464 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 16:32:32.070549 kernel: audit: type=1106 audit(1719333152.068:376): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:32:32.068000 audit[1429]: CRED_DISP pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:32:32.077003 kernel: audit: type=1104 audit(1719333152.068:377): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:32:32.081545 sshd[1426]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:32.083000 audit[1426]: USER_END pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:32:32.086878 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:53056.service: Deactivated successfully. Jun 25 16:32:32.087810 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:32:32.088766 kernel: audit: type=1106 audit(1719333152.083:378): pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:32:32.088013 systemd[1]: session-7.scope: Consumed 5.882s CPU time. Jun 25 16:32:32.088799 systemd-logind[1274]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:32:32.084000 audit[1426]: CRED_DISP pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:32:32.092210 kernel: audit: type=1104 audit(1719333152.084:379): pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:32:32.092327 kernel: audit: type=1131 audit(1719333152.085:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:53056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:32.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:53056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:32.091651 systemd-logind[1274]: Removed session 7. 
Jun 25 16:32:34.250053 kubelet[2286]: E0625 16:32:34.250011 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:34.729604 kubelet[2286]: E0625 16:32:34.726938 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:35.178770 kubelet[2286]: E0625 16:32:35.178626 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:35.180038 kubelet[2286]: E0625 16:32:35.180018 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:38.805395 kubelet[2286]: I0625 16:32:38.805358 2286 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:32:38.805936 containerd[1288]: time="2024-06-25T16:32:38.805857219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:32:38.806256 kubelet[2286]: I0625 16:32:38.806055 2286 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:32:39.064923 kubelet[2286]: I0625 16:32:39.064024 2286 topology_manager.go:215] "Topology Admit Handler" podUID="f7077df9-36be-438c-b0c2-4344cdddd573" podNamespace="kube-system" podName="kube-proxy-969nx" Jun 25 16:32:39.086143 systemd[1]: Created slice kubepods-besteffort-podf7077df9_36be_438c_b0c2_4344cdddd573.slice - libcontainer container kubepods-besteffort-podf7077df9_36be_438c_b0c2_4344cdddd573.slice. 
Jun 25 16:32:39.191983 kubelet[2286]: I0625 16:32:39.191634 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvgm\" (UniqueName: \"kubernetes.io/projected/f7077df9-36be-438c-b0c2-4344cdddd573-kube-api-access-9qvgm\") pod \"kube-proxy-969nx\" (UID: \"f7077df9-36be-438c-b0c2-4344cdddd573\") " pod="kube-system/kube-proxy-969nx" Jun 25 16:32:39.191983 kubelet[2286]: I0625 16:32:39.191740 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7077df9-36be-438c-b0c2-4344cdddd573-xtables-lock\") pod \"kube-proxy-969nx\" (UID: \"f7077df9-36be-438c-b0c2-4344cdddd573\") " pod="kube-system/kube-proxy-969nx" Jun 25 16:32:39.191983 kubelet[2286]: I0625 16:32:39.191802 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7077df9-36be-438c-b0c2-4344cdddd573-kube-proxy\") pod \"kube-proxy-969nx\" (UID: \"f7077df9-36be-438c-b0c2-4344cdddd573\") " pod="kube-system/kube-proxy-969nx" Jun 25 16:32:39.191983 kubelet[2286]: I0625 16:32:39.191842 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7077df9-36be-438c-b0c2-4344cdddd573-lib-modules\") pod \"kube-proxy-969nx\" (UID: \"f7077df9-36be-438c-b0c2-4344cdddd573\") " pod="kube-system/kube-proxy-969nx" Jun 25 16:32:39.258779 kubelet[2286]: I0625 16:32:39.258188 2286 topology_manager.go:215] "Topology Admit Handler" podUID="0d28ddc0-9f79-4be3-897f-adf35c941096" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-xcvvg" Jun 25 16:32:39.265667 systemd[1]: Created slice kubepods-besteffort-pod0d28ddc0_9f79_4be3_897f_adf35c941096.slice - libcontainer container kubepods-besteffort-pod0d28ddc0_9f79_4be3_897f_adf35c941096.slice. 
Jun 25 16:32:39.394854 kubelet[2286]: I0625 16:32:39.393949 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d28ddc0-9f79-4be3-897f-adf35c941096-var-lib-calico\") pod \"tigera-operator-76c4974c85-xcvvg\" (UID: \"0d28ddc0-9f79-4be3-897f-adf35c941096\") " pod="tigera-operator/tigera-operator-76c4974c85-xcvvg" Jun 25 16:32:39.394854 kubelet[2286]: I0625 16:32:39.394020 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqbfp\" (UniqueName: \"kubernetes.io/projected/0d28ddc0-9f79-4be3-897f-adf35c941096-kube-api-access-mqbfp\") pod \"tigera-operator-76c4974c85-xcvvg\" (UID: \"0d28ddc0-9f79-4be3-897f-adf35c941096\") " pod="tigera-operator/tigera-operator-76c4974c85-xcvvg" Jun 25 16:32:39.404178 kubelet[2286]: E0625 16:32:39.397167 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:39.404416 containerd[1288]: time="2024-06-25T16:32:39.398374752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-969nx,Uid:f7077df9-36be-438c-b0c2-4344cdddd573,Namespace:kube-system,Attempt:0,}" Jun 25 16:32:39.573526 containerd[1288]: time="2024-06-25T16:32:39.572917658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xcvvg,Uid:0d28ddc0-9f79-4be3-897f-adf35c941096,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:32:40.329764 containerd[1288]: time="2024-06-25T16:32:40.329378481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:40.329764 containerd[1288]: time="2024-06-25T16:32:40.329430009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:40.329764 containerd[1288]: time="2024-06-25T16:32:40.329448604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:40.329764 containerd[1288]: time="2024-06-25T16:32:40.329459805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:40.351432 systemd[1]: run-containerd-runc-k8s.io-8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629-runc.HXwQvO.mount: Deactivated successfully. Jun 25 16:32:40.365298 systemd[1]: Started cri-containerd-8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629.scope - libcontainer container 8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629. 
Jun 25 16:32:40.375000 audit: BPF prog-id=96 op=LOAD Jun 25 16:32:40.376000 audit: BPF prog-id=97 op=LOAD Jun 25 16:32:40.378846 kernel: audit: type=1334 audit(1719333160.375:381): prog-id=96 op=LOAD Jun 25 16:32:40.379002 kernel: audit: type=1334 audit(1719333160.376:382): prog-id=97 op=LOAD Jun 25 16:32:40.379038 kernel: audit: type=1300 audit(1719333160.376:382): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2386 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.376000 audit[2396]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2386 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.382557 kernel: audit: type=1327 audit(1719333160.376:382): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343861613434613535303866653930626138623963636165653239 Jun 25 16:32:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343861613434613535303866653930626138623963636165653239 Jun 25 16:32:40.376000 audit: BPF prog-id=98 op=LOAD Jun 25 16:32:40.388169 kernel: audit: type=1334 audit(1719333160.376:383): prog-id=98 op=LOAD Jun 25 16:32:40.388455 kernel: audit: type=1300 audit(1719333160.376:383): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2386 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.376000 audit[2396]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2386 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343861613434613535303866653930626138623963636165653239 Jun 25 16:32:40.399843 kernel: audit: type=1327 audit(1719333160.376:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343861613434613535303866653930626138623963636165653239 Jun 25 16:32:40.400006 kernel: audit: type=1334 audit(1719333160.376:384): prog-id=98 op=UNLOAD Jun 25 16:32:40.400051 kernel: audit: type=1334 audit(1719333160.376:385): prog-id=97 op=UNLOAD Jun 25 16:32:40.376000 audit: BPF prog-id=98 op=UNLOAD Jun 25 16:32:40.376000 audit: BPF prog-id=97 op=UNLOAD Jun 25 16:32:40.401444 kernel: audit: type=1334 audit(1719333160.376:386): prog-id=99 op=LOAD Jun 25 16:32:40.376000 audit: BPF prog-id=99 op=LOAD Jun 25 
16:32:40.376000 audit[2396]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2386 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343861613434613535303866653930626138623963636165653239 Jun 25 16:32:40.412099 containerd[1288]: time="2024-06-25T16:32:40.412037841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-969nx,Uid:f7077df9-36be-438c-b0c2-4344cdddd573,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629\"" Jun 25 16:32:40.412611 containerd[1288]: time="2024-06-25T16:32:40.412306175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:40.412611 containerd[1288]: time="2024-06-25T16:32:40.412410131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:40.412611 containerd[1288]: time="2024-06-25T16:32:40.412451068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:40.412611 containerd[1288]: time="2024-06-25T16:32:40.412467338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:40.413156 kubelet[2286]: E0625 16:32:40.413134 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:40.417804 containerd[1288]: time="2024-06-25T16:32:40.416846520Z" level=info msg="CreateContainer within sandbox \"8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:32:40.443992 systemd[1]: Started cri-containerd-14b16005998c8a22e785407c033eb8d289ff5be4418a594ee9a414c8422c5183.scope - libcontainer container 14b16005998c8a22e785407c033eb8d289ff5be4418a594ee9a414c8422c5183. 
Jun 25 16:32:40.455000 audit: BPF prog-id=100 op=LOAD Jun 25 16:32:40.455000 audit: BPF prog-id=101 op=LOAD Jun 25 16:32:40.455000 audit[2436]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2421 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134623136303035393938633861323265373835343037633033336562 Jun 25 16:32:40.456000 audit: BPF prog-id=102 op=LOAD Jun 25 16:32:40.456000 audit[2436]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2421 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134623136303035393938633861323265373835343037633033336562 Jun 25 16:32:40.456000 audit: BPF prog-id=102 op=UNLOAD Jun 25 16:32:40.456000 audit: BPF prog-id=101 op=UNLOAD Jun 25 16:32:40.456000 audit: BPF prog-id=103 op=LOAD Jun 25 16:32:40.456000 audit[2436]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2421 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134623136303035393938633861323265373835343037633033336562 Jun 25 16:32:40.499623 containerd[1288]: time="2024-06-25T16:32:40.499570962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xcvvg,Uid:0d28ddc0-9f79-4be3-897f-adf35c941096,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14b16005998c8a22e785407c033eb8d289ff5be4418a594ee9a414c8422c5183\"" Jun 25 16:32:40.502988 containerd[1288]: time="2024-06-25T16:32:40.502952989Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:32:41.899327 containerd[1288]: time="2024-06-25T16:32:41.898734419Z" level=info msg="CreateContainer within sandbox \"8b48aa44a5508fe90ba8b9ccaee297bf202b195f7df2fcf7d79c639be1b28629\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"041b5641790be536af4f7b13db375db8dc97fb05e5a4eca47826e257a6dfaa4c\"" Jun 25 16:32:41.900944 containerd[1288]: time="2024-06-25T16:32:41.900891473Z" level=info msg="StartContainer for \"041b5641790be536af4f7b13db375db8dc97fb05e5a4eca47826e257a6dfaa4c\"" Jun 25 16:32:41.946965 systemd[1]: Started cri-containerd-041b5641790be536af4f7b13db375db8dc97fb05e5a4eca47826e257a6dfaa4c.scope - libcontainer container 041b5641790be536af4f7b13db375db8dc97fb05e5a4eca47826e257a6dfaa4c. 
Jun 25 16:32:42.016000 audit: BPF prog-id=104 op=LOAD Jun 25 16:32:42.016000 audit[2467]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2386 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034316235363431373930626535333661663466376231336462333735 Jun 25 16:32:42.016000 audit: BPF prog-id=105 op=LOAD Jun 25 16:32:42.016000 audit[2467]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2386 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034316235363431373930626535333661663466376231336462333735 Jun 25 16:32:42.018000 audit: BPF prog-id=105 op=UNLOAD Jun 25 16:32:42.018000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:32:42.018000 audit: BPF prog-id=106 op=LOAD Jun 25 16:32:42.018000 audit[2467]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2386 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034316235363431373930626535333661663466376231336462333735 Jun 25 16:32:42.108240 containerd[1288]: time="2024-06-25T16:32:42.106110095Z" level=info msg="StartContainer for \"041b5641790be536af4f7b13db375db8dc97fb05e5a4eca47826e257a6dfaa4c\" returns successfully" Jun 25 16:32:42.189000 audit[2520]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.189000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffee476800 a2=0 a3=7fffee4767ec items=0 ppid=2478 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:32:42.189000 audit[2521]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.189000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3e7cdeb0 a2=0 a3=7fff3e7cde9c items=0 ppid=2478 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:32:42.193000 audit[2522]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.193000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0aacc790 a2=0 a3=7fff0aacc77c items=0 ppid=2478 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:32:42.195000 audit[2524]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.195000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff482a9ea0 a2=0 a3=7fff482a9e8c items=0 ppid=2478 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:32:42.195000 audit[2523]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.195000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3ec28d20 a2=0 a3=7fff3ec28d0c items=0 ppid=2478 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:32:42.198000 audit[2525]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.198000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef8690770 a2=0 a3=7ffef869075c items=0 ppid=2478 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:32:42.229836 kubelet[2286]: E0625 16:32:42.229789 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:42.256614 kubelet[2286]: I0625 16:32:42.256271 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-969nx" podStartSLOduration=4.255217734 podCreationTimestamp="2024-06-25 16:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 
16:32:42.254381632 +0000 UTC m=+16.254661154" watchObservedRunningTime="2024-06-25 16:32:42.255217734 +0000 UTC m=+16.255497226" Jun 25 16:32:42.297000 audit[2526]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.297000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc4cf2c950 a2=0 a3=7ffc4cf2c93c items=0 ppid=2478 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:32:42.303000 audit[2528]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.303000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc1a64a8d0 a2=0 a3=7ffc1a64a8bc items=0 ppid=2478 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:32:42.309000 audit[2531]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.309000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeae7310d0 a2=0 a3=7ffeae7310bc items=0 ppid=2478 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:32:42.311000 audit[2532]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.311000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe533d8f00 a2=0 a3=7ffe533d8eec items=0 ppid=2478 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:32:42.319000 audit[2534]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.319000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc5e0a100 a2=0 a3=7fffc5e0a0ec items=0 ppid=2478 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:32:42.321000 audit[2535]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.321000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc98b978d0 a2=0 a3=7ffc98b978bc items=0 ppid=2478 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.321000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:32:42.324000 audit[2537]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.324000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe16e16410 a2=0 a3=7ffe16e163fc items=0 ppid=2478 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:32:42.329000 audit[2540]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.329000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb9fcbdc0 a2=0 a3=7ffcb9fcbdac items=0 ppid=2478 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:32:42.331000 audit[2541]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.331000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc19514200 a2=0 a3=7ffc195141ec items=0 ppid=2478 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:32:42.334000 audit[2543]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:32:42.334000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffebc3c77c0 a2=0 a3=7ffebc3c77ac items=0 ppid=2478 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:32:42.337000 audit[2544]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.337000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1c0c6730 a2=0 a3=7fff1c0c671c items=0 ppid=2478 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:32:42.341000 audit[2546]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.341000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7745a2c0 a2=0 a3=7fff7745a2ac items=0 ppid=2478 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:32:42.349000 audit[2549]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.349000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0d004a30 a2=0 a3=7ffd0d004a1c items=0 ppid=2478 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:32:42.356000 audit[2552]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.356000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf89d9670 a2=0 a3=7ffcf89d965c items=0 ppid=2478 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.356000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:32:42.358000 audit[2553]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.358000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3fa98790 a2=0 a3=7ffe3fa9877c items=0 ppid=2478 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:32:42.362000 audit[2555]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.362000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb0ee64d0 a2=0 a3=7ffdb0ee64bc items=0 ppid=2478 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:32:42.371000 audit[2558]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.371000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe5e8f4e0 a2=0 a3=7fffe5e8f4cc items=0 ppid=2478 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:32:42.374000 audit[2559]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.374000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3d5584f0 a2=0 a3=7fff3d5584dc items=0 ppid=2478 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:32:42.378000 audit[2561]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:32:42.378000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe1837ec60 a2=0 a3=7ffe1837ec4c items=0 ppid=2478 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:32:42.447000 audit[2567]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:42.447000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe8200ab20 a2=0 a3=7ffe8200ab0c items=0 ppid=2478 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:42.457000 audit[2567]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:42.457000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe8200ab20 a2=0 a3=7ffe8200ab0c items=0 ppid=2478 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:42.460000 audit[2573]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.460000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec78b4e00 a2=0 a3=7ffec78b4dec items=0 ppid=2478 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:32:42.474000 audit[2575]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.474000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd48e2faf0 a2=0 a3=7ffd48e2fadc items=0 ppid=2478 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:32:42.480000 audit[2578]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.480000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffc7e928f50 a2=0 a3=7ffc7e928f3c items=0 ppid=2478 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:32:42.483000 audit[2579]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.483000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff774a12a0 a2=0 a3=7fff774a128c items=0 ppid=2478 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:32:42.487000 audit[2581]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.487000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb8511c70 a2=0 a3=7ffcb8511c5c items=0 ppid=2478 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:32:42.489000 audit[2582]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.489000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0d08a2b0 a2=0 a3=7ffd0d08a29c items=0 ppid=2478 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:32:42.499000 audit[2584]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.499000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff11867470 a2=0 a3=7fff1186745c items=0 ppid=2478 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.499000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:32:42.511000 audit[2587]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.511000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffea3a6aa80 a2=0 a3=7ffea3a6aa6c items=0 ppid=2478 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:32:42.512000 audit[2588]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.512000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb1f9cdf0 a2=0 a3=7ffeb1f9cddc items=0 ppid=2478 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:32:42.522000 audit[2590]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.522000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6b8d5440 a2=0 a3=7ffd6b8d542c items=0 ppid=2478 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:32:42.526000 audit[2591]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.526000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd62b8fbf0 a2=0 a3=7ffd62b8fbdc items=0 ppid=2478 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:32:42.532000 audit[2593]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.532000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7d7ce180 a2=0 a3=7ffc7d7ce16c 
items=0 ppid=2478 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:32:42.538000 audit[2596]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.538000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4bb4ad80 a2=0 a3=7ffd4bb4ad6c items=0 ppid=2478 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:32:42.543000 audit[2599]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.543000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc9e3f140 a2=0 a3=7fffc9e3f12c items=0 ppid=2478 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:32:42.545000 audit[2600]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.545000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff236d590 a2=0 a3=7ffff236d57c items=0 ppid=2478 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:32:42.549000 audit[2602]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.549000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe7de8e250 a2=0 a3=7ffe7de8e23c items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.549000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:32:42.554000 audit[2605]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.554000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffe10f5970 a2=0 a3=7fffe10f595c items=0 ppid=2478 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:32:42.555000 audit[2606]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.555000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77032c30 a2=0 a3=7ffe77032c1c items=0 ppid=2478 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:32:42.557000 audit[2608]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.557000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffa05f8830 a2=0 a3=7fffa05f881c items=0 ppid=2478 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:32:42.558000 audit[2609]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.558000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe34216e0 a2=0 a3=7fffe34216cc items=0 ppid=2478 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:32:42.562000 audit[2611]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.562000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe5f3fc1b0 a2=0 a3=7ffe5f3fc19c items=0 ppid=2478 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:32:42.566000 audit[2614]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:32:42.566000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf35b9270 a2=0 a3=7ffdf35b925c items=0 ppid=2478 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:32:42.569000 audit[2616]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:32:42.569000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc9eebdc00 a2=0 a3=7ffc9eebdbec items=0 ppid=2478 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.569000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:42.570000 audit[2616]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:32:42.570000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc9eebdc00 a2=0 a3=7ffc9eebdbec items=0 ppid=2478 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.570000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:43.240320 kubelet[2286]: E0625 16:32:43.240286 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:43.589614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030321085.mount: Deactivated successfully. 
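The PROCTITLE field in the audit records above carries the full ip6tables argv, hex-encoded with NUL bytes separating the arguments, which is why the kube-proxy chain and rule names are not directly readable. A minimal decoding sketch in Python, using one of the complete proctitle values from these records (the KUBE-FORWARD chain creation):

    # Decode an audit PROCTITLE value: hex string -> NUL-separated argv.
    raw = bytes.fromhex(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D464F5257415244002D740066696C746572"
    )
    print(" ".join(part.decode() for part in raw.split(b"\x00")))
    # -> ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter

The same decoding applies to the iptables-restore and runc PROCTITLE values later in the log.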
Jun 25 16:32:44.796609 containerd[1288]: time="2024-06-25T16:32:44.795434062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:44.806633 containerd[1288]: time="2024-06-25T16:32:44.806434267Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076100" Jun 25 16:32:44.808979 containerd[1288]: time="2024-06-25T16:32:44.808912792Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:44.814306 containerd[1288]: time="2024-06-25T16:32:44.814105717Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:44.818284 containerd[1288]: time="2024-06-25T16:32:44.817301771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:44.818486 containerd[1288]: time="2024-06-25T16:32:44.818431944Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 4.315433901s" Jun 25 16:32:44.818539 containerd[1288]: time="2024-06-25T16:32:44.818486076Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:32:44.827435 containerd[1288]: time="2024-06-25T16:32:44.827177773Z" level=info msg="CreateContainer within sandbox \"14b16005998c8a22e785407c033eb8d289ff5be4418a594ee9a414c8422c5183\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:32:44.871463 containerd[1288]: time="2024-06-25T16:32:44.871261304Z" level=info msg="CreateContainer within sandbox \"14b16005998c8a22e785407c033eb8d289ff5be4418a594ee9a414c8422c5183\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4f958169d5b6195a3b3ca3d0c2cf31abf35a73b3d008977b873823f5fc50360c\"" Jun 25 16:32:44.874009 containerd[1288]: time="2024-06-25T16:32:44.873115717Z" level=info msg="StartContainer for \"4f958169d5b6195a3b3ca3d0c2cf31abf35a73b3d008977b873823f5fc50360c\"" Jun 25 16:32:44.942328 systemd[1]: Started cri-containerd-4f958169d5b6195a3b3ca3d0c2cf31abf35a73b3d008977b873823f5fc50360c.scope - libcontainer container 4f958169d5b6195a3b3ca3d0c2cf31abf35a73b3d008977b873823f5fc50360c. 
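For scale, the pull record above reports 22076100 bytes read over 4.315433901 s for the tigera/operator image; a quick back-of-the-envelope check of the effective rate, using the numbers straight from the log:

    # Effective pull rate for quay.io/tigera/operator:v1.34.0,
    # from the "bytes read" and duration reported by containerd above.
    bytes_read = 22_076_100
    duration_s = 4.315433901
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # ≈ 5.1 MB/s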
Jun 25 16:32:44.973000 audit: BPF prog-id=107 op=LOAD Jun 25 16:32:44.974000 audit: BPF prog-id=108 op=LOAD Jun 25 16:32:44.974000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=2421 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:44.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466393538313639643562363139356133623363613364306332636633 Jun 25 16:32:44.975000 audit: BPF prog-id=109 op=LOAD Jun 25 16:32:44.975000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=2421 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:44.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466393538313639643562363139356133623363613364306332636633 Jun 25 16:32:44.975000 audit: BPF prog-id=109 op=UNLOAD Jun 25 16:32:44.975000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:32:44.975000 audit: BPF prog-id=110 op=LOAD Jun 25 16:32:44.975000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=2421 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:44.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466393538313639643562363139356133623363613364306332636633 Jun 25 16:32:45.016217 containerd[1288]: time="2024-06-25T16:32:45.016149333Z" level=info msg="StartContainer for \"4f958169d5b6195a3b3ca3d0c2cf31abf35a73b3d008977b873823f5fc50360c\" returns successfully" Jun 25 16:32:46.188372 kubelet[2286]: I0625 16:32:46.187775 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-xcvvg" podStartSLOduration=2.8710195929999998 podCreationTimestamp="2024-06-25 16:32:39 +0000 UTC" firstStartedPulling="2024-06-25 16:32:40.502515726 +0000 UTC m=+14.502795218" lastFinishedPulling="2024-06-25 16:32:44.819196881 +0000 UTC m=+18.819476373" observedRunningTime="2024-06-25 16:32:45.28917196 +0000 UTC m=+19.289451482" watchObservedRunningTime="2024-06-25 16:32:46.187700748 +0000 UTC m=+20.187980240" Jun 25 16:32:48.389000 audit[2670]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.395056 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:32:48.395209 kernel: audit: type=1325 audit(1719333168.389:455): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.395257 kernel: audit: type=1300 audit(1719333168.389:455): 
arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6064dcc0 a2=0 a3=7fff6064dcac items=0 ppid=2478 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.389000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6064dcc0 a2=0 a3=7fff6064dcac items=0 ppid=2478 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.403489 kernel: audit: type=1327 audit(1719333168.389:455): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.390000 audit[2670]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.430055 kernel: audit: type=1325 audit(1719333168.390:456): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.430193 kernel: audit: type=1300 audit(1719333168.390:456): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6064dcc0 a2=0 a3=0 items=0 ppid=2478 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.390000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6064dcc0 a2=0 a3=0 items=0 ppid=2478 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.442644 kernel: audit: type=1327 audit(1719333168.390:456): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.432000 audit[2672]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.432000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0150f070 a2=0 a3=7ffc0150f05c items=0 ppid=2478 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.461721 kernel: audit: type=1325 audit(1719333168.432:457): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.461902 kernel: audit: type=1300 audit(1719333168.432:457): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0150f070 a2=0 a3=7ffc0150f05c items=0 ppid=2478 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.461958 kernel: audit: type=1327 audit(1719333168.432:457): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.454000 audit[2672]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.454000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0150f070 a2=0 a3=0 items=0 ppid=2478 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:48.484924 kernel: audit: type=1325 audit(1719333168.454:458): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:48.628785 kubelet[2286]: I0625 16:32:48.628711 2286 topology_manager.go:215] "Topology Admit Handler" podUID="c03059c5-6e37-4b6c-8ae9-27808464f0b8" podNamespace="calico-system" podName="calico-typha-85cdbdb965-s572j" Jun 25 16:32:48.652113 systemd[1]: Created slice kubepods-besteffort-podc03059c5_6e37_4b6c_8ae9_27808464f0b8.slice - libcontainer container kubepods-besteffort-podc03059c5_6e37_4b6c_8ae9_27808464f0b8.slice. Jun 25 16:32:48.768352 kubelet[2286]: I0625 16:32:48.768304 2286 topology_manager.go:215] "Topology Admit Handler" podUID="91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" podNamespace="calico-system" podName="calico-node-77kqk" Jun 25 16:32:48.783767 systemd[1]: Created slice kubepods-besteffort-pod91e183b5_fd5e_4cbd_ac19_c8a8d84a7b2e.slice - libcontainer container kubepods-besteffort-pod91e183b5_fd5e_4cbd_ac19_c8a8d84a7b2e.slice. 
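The slice names systemd reports here are derived mechanically from the pod UID under the systemd cgroup driver: dashes in the UID become underscores and the pod's QoS class is folded into the prefix. A small sketch of that mapping (the helper name is ours; the output matches the slice created above):

    # Hypothetical helper mirroring how the kubepods slice leaf name is formed
    # from a pod UID when kubelet uses the systemd cgroup driver.
    def pod_slice(uid: str, qos: str = "besteffort") -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("c03059c5-6e37-4b6c-8ae9-27808464f0b8"))
    # -> kubepods-besteffort-podc03059c5_6e37_4b6c_8ae9_27808464f0b8.slice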
Jun 25 16:32:48.806221 kubelet[2286]: I0625 16:32:48.804346 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjftk\" (UniqueName: \"kubernetes.io/projected/c03059c5-6e37-4b6c-8ae9-27808464f0b8-kube-api-access-zjftk\") pod \"calico-typha-85cdbdb965-s572j\" (UID: \"c03059c5-6e37-4b6c-8ae9-27808464f0b8\") " pod="calico-system/calico-typha-85cdbdb965-s572j" Jun 25 16:32:48.806221 kubelet[2286]: I0625 16:32:48.804415 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c03059c5-6e37-4b6c-8ae9-27808464f0b8-typha-certs\") pod \"calico-typha-85cdbdb965-s572j\" (UID: \"c03059c5-6e37-4b6c-8ae9-27808464f0b8\") " pod="calico-system/calico-typha-85cdbdb965-s572j" Jun 25 16:32:48.806221 kubelet[2286]: I0625 16:32:48.804447 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03059c5-6e37-4b6c-8ae9-27808464f0b8-tigera-ca-bundle\") pod \"calico-typha-85cdbdb965-s572j\" (UID: \"c03059c5-6e37-4b6c-8ae9-27808464f0b8\") " pod="calico-system/calico-typha-85cdbdb965-s572j" Jun 25 16:32:48.906243 kubelet[2286]: I0625 16:32:48.905630 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-xtables-lock\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906243 kubelet[2286]: I0625 16:32:48.905678 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-flexvol-driver-host\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906243 kubelet[2286]: I0625 16:32:48.905721 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-net-dir\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906243 kubelet[2286]: I0625 16:32:48.905761 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-run-calico\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906243 kubelet[2286]: I0625 16:32:48.905789 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-tigera-ca-bundle\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906630 kubelet[2286]: I0625 16:32:48.905816 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-lib-calico\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " 
pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906630 kubelet[2286]: I0625 16:32:48.905846 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-bin-dir\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906630 kubelet[2286]: I0625 16:32:48.905873 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-policysync\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906630 kubelet[2286]: I0625 16:32:48.905898 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-node-certs\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906630 kubelet[2286]: I0625 16:32:48.905922 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-log-dir\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.906821 kubelet[2286]: I0625 16:32:48.905970 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-lib-modules\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.919223 kubelet[2286]: I0625 16:32:48.915045 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k875l\" (UniqueName: \"kubernetes.io/projected/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-kube-api-access-k875l\") pod \"calico-node-77kqk\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " pod="calico-system/calico-node-77kqk" Jun 25 16:32:48.983474 kubelet[2286]: E0625 16:32:48.983441 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:48.997353 kubelet[2286]: I0625 16:32:48.997313 2286 topology_manager.go:215] "Topology Admit Handler" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" podNamespace="calico-system" podName="csi-node-driver-7xkz9" Jun 25 16:32:48.997666 kubelet[2286]: E0625 16:32:48.997649 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:49.004092 containerd[1288]: time="2024-06-25T16:32:49.004038079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85cdbdb965-s572j,Uid:c03059c5-6e37-4b6c-8ae9-27808464f0b8,Namespace:calico-system,Attempt:0,}" Jun 25 16:32:49.016547 kubelet[2286]: I0625 16:32:49.015525 2286 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72bf43a2-ad8b-409f-8c68-9b745ebeb647-varrun\") pod \"csi-node-driver-7xkz9\" (UID: \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\") " pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:32:49.016547 kubelet[2286]: I0625 16:32:49.015600 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72bf43a2-ad8b-409f-8c68-9b745ebeb647-socket-dir\") pod \"csi-node-driver-7xkz9\" (UID: \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\") " pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:32:49.016547 kubelet[2286]: I0625 16:32:49.015698 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcft\" (UniqueName: \"kubernetes.io/projected/72bf43a2-ad8b-409f-8c68-9b745ebeb647-kube-api-access-qmcft\") pod \"csi-node-driver-7xkz9\" (UID: \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\") " pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:32:49.016547 kubelet[2286]: I0625 16:32:49.015797 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72bf43a2-ad8b-409f-8c68-9b745ebeb647-kubelet-dir\") pod \"csi-node-driver-7xkz9\" (UID: \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\") " pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:32:49.016547 kubelet[2286]: I0625 16:32:49.015899 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72bf43a2-ad8b-409f-8c68-9b745ebeb647-registration-dir\") pod \"csi-node-driver-7xkz9\" (UID: \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\") " pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:32:49.124739 kubelet[2286]: E0625 16:32:49.124692 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.124939 kubelet[2286]: W0625 16:32:49.124767 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.124939 kubelet[2286]: E0625 16:32:49.124810 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.126544 kubelet[2286]: E0625 16:32:49.126483 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.126544 kubelet[2286]: W0625 16:32:49.126508 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.126544 kubelet[2286]: E0625 16:32:49.126540 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.126964 kubelet[2286]: E0625 16:32:49.126895 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.126964 kubelet[2286]: W0625 16:32:49.126909 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.127661 kubelet[2286]: E0625 16:32:49.127365 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.129336 kubelet[2286]: E0625 16:32:49.128669 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.129336 kubelet[2286]: W0625 16:32:49.128684 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.129336 kubelet[2286]: E0625 16:32:49.128789 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.129336 kubelet[2286]: E0625 16:32:49.129000 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.129336 kubelet[2286]: W0625 16:32:49.129010 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.129336 kubelet[2286]: E0625 16:32:49.129131 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.129874 kubelet[2286]: E0625 16:32:49.129821 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.129874 kubelet[2286]: W0625 16:32:49.129840 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.130040 kubelet[2286]: E0625 16:32:49.129908 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.130424 kubelet[2286]: E0625 16:32:49.130240 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.130424 kubelet[2286]: W0625 16:32:49.130256 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.130424 kubelet[2286]: E0625 16:32:49.130356 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.131089 kubelet[2286]: E0625 16:32:49.130730 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.131089 kubelet[2286]: W0625 16:32:49.130759 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.131089 kubelet[2286]: E0625 16:32:49.130907 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.131426 kubelet[2286]: E0625 16:32:49.131376 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.131426 kubelet[2286]: W0625 16:32:49.131411 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.131530 kubelet[2286]: E0625 16:32:49.131504 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.131903 kubelet[2286]: E0625 16:32:49.131719 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.131903 kubelet[2286]: W0625 16:32:49.131782 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.131903 kubelet[2286]: E0625 16:32:49.131884 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.132660 kubelet[2286]: E0625 16:32:49.132438 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.132660 kubelet[2286]: W0625 16:32:49.132463 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.132660 kubelet[2286]: E0625 16:32:49.132521 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.133203 kubelet[2286]: E0625 16:32:49.133058 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.133203 kubelet[2286]: W0625 16:32:49.133077 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.133203 kubelet[2286]: E0625 16:32:49.133180 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.133455 kubelet[2286]: E0625 16:32:49.133427 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.133455 kubelet[2286]: W0625 16:32:49.133442 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.133583 kubelet[2286]: E0625 16:32:49.133564 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.133882 kubelet[2286]: E0625 16:32:49.133866 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.133929 kubelet[2286]: W0625 16:32:49.133882 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.134037 kubelet[2286]: E0625 16:32:49.134007 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.134199 kubelet[2286]: E0625 16:32:49.134182 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.134199 kubelet[2286]: W0625 16:32:49.134197 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.134310 kubelet[2286]: E0625 16:32:49.134294 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.134641 kubelet[2286]: E0625 16:32:49.134626 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.134641 kubelet[2286]: W0625 16:32:49.134641 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.134713 kubelet[2286]: E0625 16:32:49.134701 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.135065 kubelet[2286]: E0625 16:32:49.135035 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.135065 kubelet[2286]: W0625 16:32:49.135050 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.135149 kubelet[2286]: E0625 16:32:49.135100 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.135295 kubelet[2286]: E0625 16:32:49.135276 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.135295 kubelet[2286]: W0625 16:32:49.135288 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.135380 kubelet[2286]: E0625 16:32:49.135360 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.135487 kubelet[2286]: E0625 16:32:49.135471 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.135487 kubelet[2286]: W0625 16:32:49.135483 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.135550 kubelet[2286]: E0625 16:32:49.135501 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.135796 kubelet[2286]: E0625 16:32:49.135780 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.135796 kubelet[2286]: W0625 16:32:49.135794 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.135868 kubelet[2286]: E0625 16:32:49.135816 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.136095 kubelet[2286]: E0625 16:32:49.136051 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.136095 kubelet[2286]: W0625 16:32:49.136068 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.136178 kubelet[2286]: E0625 16:32:49.136112 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.136264 kubelet[2286]: E0625 16:32:49.136248 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.136264 kubelet[2286]: W0625 16:32:49.136261 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.136362 kubelet[2286]: E0625 16:32:49.136318 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.136563 kubelet[2286]: E0625 16:32:49.136514 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.136563 kubelet[2286]: W0625 16:32:49.136535 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.138269 kubelet[2286]: E0625 16:32:49.136634 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.141457 kubelet[2286]: E0625 16:32:49.140493 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.141457 kubelet[2286]: W0625 16:32:49.140517 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.141457 kubelet[2286]: E0625 16:32:49.140557 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.142160 kubelet[2286]: E0625 16:32:49.142132 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.142160 kubelet[2286]: W0625 16:32:49.142150 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.142243 kubelet[2286]: E0625 16:32:49.142172 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.142659 kubelet[2286]: E0625 16:32:49.142391 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.142720 kubelet[2286]: W0625 16:32:49.142665 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.142720 kubelet[2286]: E0625 16:32:49.142685 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.243336 kubelet[2286]: E0625 16:32:49.243092 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.243336 kubelet[2286]: W0625 16:32:49.243118 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.243336 kubelet[2286]: E0625 16:32:49.243156 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:49.247452 kubelet[2286]: E0625 16:32:49.246558 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.247452 kubelet[2286]: W0625 16:32:49.246577 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.247452 kubelet[2286]: E0625 16:32:49.246713 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.249987 kubelet[2286]: E0625 16:32:49.249261 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.249987 kubelet[2286]: W0625 16:32:49.249394 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.249987 kubelet[2286]: E0625 16:32:49.249477 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.250189 kubelet[2286]: E0625 16:32:49.250020 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:49.250189 kubelet[2286]: W0625 16:32:49.250050 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:49.250189 kubelet[2286]: E0625 16:32:49.250066 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:49.255173 containerd[1288]: time="2024-06-25T16:32:49.254381904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:49.255173 containerd[1288]: time="2024-06-25T16:32:49.254516947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:49.255173 containerd[1288]: time="2024-06-25T16:32:49.254554267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:49.255173 containerd[1288]: time="2024-06-25T16:32:49.254609761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:49.305035 systemd[1]: Started cri-containerd-966dc6f67a62b47a2a31568fdf45b16d1825e6af4cc1b2e26137fa43891c0e11.scope - libcontainer container 966dc6f67a62b47a2a31568fdf45b16d1825e6af4cc1b2e26137fa43891c0e11. 
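The repeated driver-call failures above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver init container (note the flexvol-driver-host volume registered for calico-node-77kqk earlier) has installed the binary: the prober runs the driver with "init" and expects a JSON status object on stdout, so a missing executable and empty output surface as both the $PATH error and the "unexpected end of JSON input" error. A minimal sketch of that handshake; the reply shape follows the general FlexVolume convention and should be read as an assumption, not as Calico's exact implementation:

    import json, subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    # Run "<driver> init" the way the FlexVolume prober does and parse the reply.
    # A missing binary raises FileNotFoundError (the $PATH error above), and an
    # empty stdout fails json.loads (the "unexpected end of JSON input" analogue).
    out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
    reply = json.loads(out)
    assert reply.get("status") == "Success"
    # A conforming driver answers something like:
    #   {"status": "Success", "capabilities": {"attach": false}}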
Jun 25 16:32:49.352000 audit: BPF prog-id=111 op=LOAD Jun 25 16:32:49.353000 audit: BPF prog-id=112 op=LOAD Jun 25 16:32:49.353000 audit[2726]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2713 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:49.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936366463366636376136326234376132613331353638666466343562 Jun 25 16:32:49.353000 audit: BPF prog-id=113 op=LOAD Jun 25 16:32:49.353000 audit[2726]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2713 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:49.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936366463366636376136326234376132613331353638666466343562 Jun 25 16:32:49.353000 audit: BPF prog-id=113 op=UNLOAD Jun 25 16:32:49.353000 audit: BPF prog-id=112 op=UNLOAD Jun 25 16:32:49.353000 audit: BPF prog-id=114 op=LOAD Jun 25 16:32:49.353000 audit[2726]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2713 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:49.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936366463366636376136326234376132613331353638666466343562 Jun 25 16:32:49.389793 containerd[1288]: time="2024-06-25T16:32:49.389126358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85cdbdb965-s572j,Uid:c03059c5-6e37-4b6c-8ae9-27808464f0b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"966dc6f67a62b47a2a31568fdf45b16d1825e6af4cc1b2e26137fa43891c0e11\"" Jun 25 16:32:49.404439 kubelet[2286]: E0625 16:32:49.403829 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:49.405315 kubelet[2286]: E0625 16:32:49.404927 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:49.405953 containerd[1288]: time="2024-06-25T16:32:49.405899819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-77kqk,Uid:91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e,Namespace:calico-system,Attempt:0,}" Jun 25 16:32:49.407565 containerd[1288]: time="2024-06-25T16:32:49.407528015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:32:49.486000 audit[2748]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2748 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:49.486000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe48188c10 a2=0 a3=7ffe48188bfc items=0 ppid=2478 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:49.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:49.487000 audit[2748]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:49.487000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe48188c10 a2=0 a3=0 items=0 ppid=2478 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:49.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:50.133157 kubelet[2286]: E0625 16:32:50.132686 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:50.144581 containerd[1288]: time="2024-06-25T16:32:50.144477866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:32:50.144581 containerd[1288]: time="2024-06-25T16:32:50.144535264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:50.144581 containerd[1288]: time="2024-06-25T16:32:50.144549039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:32:50.144581 containerd[1288]: time="2024-06-25T16:32:50.144558718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:32:50.170197 systemd[1]: Started cri-containerd-be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da.scope - libcontainer container be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da. 
Jun 25 16:32:50.186000 audit: BPF prog-id=115 op=LOAD Jun 25 16:32:50.186000 audit: BPF prog-id=116 op=LOAD Jun 25 16:32:50.186000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2756 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:50.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265346339653566383064616232633139306562323534343037626133 Jun 25 16:32:50.186000 audit: BPF prog-id=117 op=LOAD Jun 25 16:32:50.186000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2756 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:50.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265346339653566383064616232633139306562323534343037626133 Jun 25 16:32:50.186000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:32:50.186000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:32:50.186000 audit: BPF prog-id=118 op=LOAD Jun 25 16:32:50.186000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2756 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:50.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265346339653566383064616232633139306562323534343037626133 Jun 25 16:32:50.203029 containerd[1288]: time="2024-06-25T16:32:50.202984038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-77kqk,Uid:91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\"" Jun 25 16:32:50.203991 kubelet[2286]: E0625 16:32:50.203808 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:50.931656 systemd[1]: run-containerd-runc-k8s.io-be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da-runc.QoJX6D.mount: Deactivated successfully. 
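In these SYSCALL records arch=c000003e is AUDIT_ARCH_X86_64, so the syscall numbers resolve against the x86_64 table: 321 is bpf(2), issued by runc while it sets up each container (the BPF prog-id LOAD/UNLOAD lines), and 46 is sendmsg(2), used by xtables-nft-multi to push rulesets over netlink in the earlier records. A tiny lookup sketch covering just the numbers that appear in this log:

    # Only the x86_64 syscall numbers seen in this log; a full mapping would
    # come from the kernel syscall table or ausyscall(8) from the audit package.
    X86_64 = {46: "sendmsg", 321: "bpf"}

    def syscall_name(arch: str, nr: int) -> str:
        return X86_64.get(nr, f"unknown({nr})") if arch == "c000003e" else f"nr {nr}"

    print(syscall_name("c000003e", 321))  # -> bpf
    print(syscall_name("c000003e", 46))   # -> sendmsg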
Jun 25 16:32:51.054982 kernel: hrtimer: interrupt took 4351749 ns Jun 25 16:32:52.133862 kubelet[2286]: E0625 16:32:52.133315 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:54.137295 kubelet[2286]: E0625 16:32:54.132863 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:55.414531 containerd[1288]: time="2024-06-25T16:32:55.414466308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:55.469966 containerd[1288]: time="2024-06-25T16:32:55.469881717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:32:55.542157 containerd[1288]: time="2024-06-25T16:32:55.542061334Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:55.554502 containerd[1288]: time="2024-06-25T16:32:55.554446366Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:55.560631 containerd[1288]: time="2024-06-25T16:32:55.560558546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:55.561843 containerd[1288]: time="2024-06-25T16:32:55.561647660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 6.154078888s" Jun 25 16:32:55.561962 containerd[1288]: time="2024-06-25T16:32:55.561937503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:32:55.573252 containerd[1288]: time="2024-06-25T16:32:55.573210247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:32:55.589962 containerd[1288]: time="2024-06-25T16:32:55.589907910Z" level=info msg="CreateContainer within sandbox \"966dc6f67a62b47a2a31568fdf45b16d1825e6af4cc1b2e26137fa43891c0e11\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:32:55.734293 containerd[1288]: time="2024-06-25T16:32:55.734218394Z" level=info msg="CreateContainer within sandbox \"966dc6f67a62b47a2a31568fdf45b16d1825e6af4cc1b2e26137fa43891c0e11\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17f849c3628baa477348151fe10f41ed7e97095d1f216da2185e4341468a2b20\"" Jun 25 16:32:55.737967 containerd[1288]: time="2024-06-25T16:32:55.735151095Z" level=info 
msg="StartContainer for \"17f849c3628baa477348151fe10f41ed7e97095d1f216da2185e4341468a2b20\"" Jun 25 16:32:55.799121 systemd[1]: Started cri-containerd-17f849c3628baa477348151fe10f41ed7e97095d1f216da2185e4341468a2b20.scope - libcontainer container 17f849c3628baa477348151fe10f41ed7e97095d1f216da2185e4341468a2b20. Jun 25 16:32:55.840000 audit: BPF prog-id=119 op=LOAD Jun 25 16:32:55.847159 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:32:55.847308 kernel: audit: type=1334 audit(1719333175.840:473): prog-id=119 op=LOAD Jun 25 16:32:55.841000 audit: BPF prog-id=120 op=LOAD Jun 25 16:32:55.850338 kernel: audit: type=1334 audit(1719333175.841:474): prog-id=120 op=LOAD Jun 25 16:32:55.856872 kernel: audit: type=1300 audit(1719333175.841:474): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2713 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.841000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2713 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137663834396333363238626161343737333438313531666531306634 Jun 25 16:32:55.863266 kernel: audit: type=1327 audit(1719333175.841:474): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137663834396333363238626161343737333438313531666531306634 Jun 25 16:32:55.841000 audit: BPF prog-id=121 op=LOAD Jun 25 16:32:55.884826 kernel: audit: type=1334 audit(1719333175.841:475): prog-id=121 op=LOAD Jun 25 16:32:55.841000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2713 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137663834396333363238626161343737333438313531666531306634 Jun 25 16:32:55.896606 kernel: audit: type=1300 audit(1719333175.841:475): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2713 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.896790 kernel: audit: type=1327 audit(1719333175.841:475): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137663834396333363238626161343737333438313531666531306634 Jun 25 16:32:55.896841 kernel: audit: type=1334 
audit(1719333175.841:476): prog-id=121 op=UNLOAD Jun 25 16:32:55.841000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:32:55.897917 kernel: audit: type=1334 audit(1719333175.841:477): prog-id=120 op=UNLOAD Jun 25 16:32:55.841000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:32:55.899426 kernel: audit: type=1334 audit(1719333175.841:478): prog-id=122 op=LOAD Jun 25 16:32:55.841000 audit: BPF prog-id=122 op=LOAD Jun 25 16:32:55.841000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2713 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137663834396333363238626161343737333438313531666531306634 Jun 25 16:32:56.031921 containerd[1288]: time="2024-06-25T16:32:56.031708223Z" level=info msg="StartContainer for \"17f849c3628baa477348151fe10f41ed7e97095d1f216da2185e4341468a2b20\" returns successfully" Jun 25 16:32:56.136309 kubelet[2286]: E0625 16:32:56.134150 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:56.304976 kubelet[2286]: E0625 16:32:56.304491 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:56.319640 kubelet[2286]: E0625 16:32:56.319567 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.319640 kubelet[2286]: W0625 16:32:56.319593 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.319640 kubelet[2286]: E0625 16:32:56.319623 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.320081 kubelet[2286]: E0625 16:32:56.320020 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.320081 kubelet[2286]: W0625 16:32:56.320036 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.320081 kubelet[2286]: E0625 16:32:56.320049 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.320862 kubelet[2286]: E0625 16:32:56.320820 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.320862 kubelet[2286]: W0625 16:32:56.320840 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.320862 kubelet[2286]: E0625 16:32:56.320854 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.321165 kubelet[2286]: E0625 16:32:56.321118 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.321165 kubelet[2286]: W0625 16:32:56.321154 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.321242 kubelet[2286]: E0625 16:32:56.321193 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.323505 kubelet[2286]: E0625 16:32:56.323473 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.323505 kubelet[2286]: W0625 16:32:56.323493 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.323505 kubelet[2286]: E0625 16:32:56.323515 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.324018 kubelet[2286]: E0625 16:32:56.323813 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.324018 kubelet[2286]: W0625 16:32:56.323835 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.324018 kubelet[2286]: E0625 16:32:56.323853 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.324298 kubelet[2286]: E0625 16:32:56.324147 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.324298 kubelet[2286]: W0625 16:32:56.324157 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.324298 kubelet[2286]: E0625 16:32:56.324169 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.324495 kubelet[2286]: E0625 16:32:56.324487 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.324553 kubelet[2286]: W0625 16:32:56.324544 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.324649 kubelet[2286]: E0625 16:32:56.324641 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.324914 kubelet[2286]: E0625 16:32:56.324905 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.325102 kubelet[2286]: W0625 16:32:56.324984 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.325102 kubelet[2286]: E0625 16:32:56.325006 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.325329 kubelet[2286]: E0625 16:32:56.325127 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.325329 kubelet[2286]: W0625 16:32:56.325134 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.325329 kubelet[2286]: E0625 16:32:56.325144 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.329863 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.334554 kubelet[2286]: W0625 16:32:56.329896 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.329927 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.332061 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.334554 kubelet[2286]: W0625 16:32:56.332080 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.332122 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.332438 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.334554 kubelet[2286]: W0625 16:32:56.332447 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.332460 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.334554 kubelet[2286]: E0625 16:32:56.332673 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.335109 kubelet[2286]: W0625 16:32:56.332684 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.335109 kubelet[2286]: E0625 16:32:56.332716 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.335109 kubelet[2286]: E0625 16:32:56.332930 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.335109 kubelet[2286]: W0625 16:32:56.332938 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.335109 kubelet[2286]: E0625 16:32:56.332950 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.340406 kubelet[2286]: E0625 16:32:56.340356 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.340406 kubelet[2286]: W0625 16:32:56.340394 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.340406 kubelet[2286]: E0625 16:32:56.340422 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.340665 kubelet[2286]: E0625 16:32:56.340652 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.340665 kubelet[2286]: W0625 16:32:56.340662 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.340727 kubelet[2286]: E0625 16:32:56.340675 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.341060 kubelet[2286]: E0625 16:32:56.340906 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.341060 kubelet[2286]: W0625 16:32:56.340921 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.341060 kubelet[2286]: E0625 16:32:56.340933 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.341168 kubelet[2286]: E0625 16:32:56.341131 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.341168 kubelet[2286]: W0625 16:32:56.341140 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.341168 kubelet[2286]: E0625 16:32:56.341157 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.341600 kubelet[2286]: E0625 16:32:56.341399 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.341600 kubelet[2286]: W0625 16:32:56.341424 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.341600 kubelet[2286]: E0625 16:32:56.341457 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.342324 kubelet[2286]: E0625 16:32:56.341738 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.342324 kubelet[2286]: W0625 16:32:56.341789 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.342324 kubelet[2286]: E0625 16:32:56.342034 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.342324 kubelet[2286]: E0625 16:32:56.342241 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.342324 kubelet[2286]: W0625 16:32:56.342250 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.342487 kubelet[2286]: E0625 16:32:56.342345 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.342487 kubelet[2286]: I0625 16:32:56.342449 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-85cdbdb965-s572j" podStartSLOduration=2.184382903 podCreationTimestamp="2024-06-25 16:32:48 +0000 UTC" firstStartedPulling="2024-06-25 16:32:49.406436496 +0000 UTC m=+23.406715988" lastFinishedPulling="2024-06-25 16:32:55.564460317 +0000 UTC m=+29.564739819" observedRunningTime="2024-06-25 16:32:56.33890869 +0000 UTC m=+30.339188192" watchObservedRunningTime="2024-06-25 16:32:56.342406734 +0000 UTC m=+30.342686236" Jun 25 16:32:56.342774 kubelet[2286]: E0625 16:32:56.342647 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.342774 kubelet[2286]: W0625 16:32:56.342659 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.342774 kubelet[2286]: E0625 16:32:56.342676 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.343080 kubelet[2286]: E0625 16:32:56.342995 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.343080 kubelet[2286]: W0625 16:32:56.343005 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.343080 kubelet[2286]: E0625 16:32:56.343019 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.345348 kubelet[2286]: E0625 16:32:56.345164 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.345348 kubelet[2286]: W0625 16:32:56.345176 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.345348 kubelet[2286]: E0625 16:32:56.345286 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.346259 kubelet[2286]: E0625 16:32:56.346214 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.346259 kubelet[2286]: W0625 16:32:56.346246 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.346451 kubelet[2286]: E0625 16:32:56.346401 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.346824 kubelet[2286]: E0625 16:32:56.346532 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.346824 kubelet[2286]: W0625 16:32:56.346542 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.346824 kubelet[2286]: E0625 16:32:56.346685 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.347841 kubelet[2286]: E0625 16:32:56.347035 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.347841 kubelet[2286]: W0625 16:32:56.347051 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.347841 kubelet[2286]: E0625 16:32:56.347082 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.347841 kubelet[2286]: E0625 16:32:56.347435 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.347841 kubelet[2286]: W0625 16:32:56.347444 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.347841 kubelet[2286]: E0625 16:32:56.347456 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.350320 kubelet[2286]: E0625 16:32:56.349035 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.350320 kubelet[2286]: W0625 16:32:56.349048 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.350320 kubelet[2286]: E0625 16:32:56.349075 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.350320 kubelet[2286]: E0625 16:32:56.349280 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.350320 kubelet[2286]: W0625 16:32:56.349289 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.350320 kubelet[2286]: E0625 16:32:56.349316 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:56.351245 kubelet[2286]: E0625 16:32:56.350830 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.351245 kubelet[2286]: W0625 16:32:56.350846 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.351245 kubelet[2286]: E0625 16:32:56.351076 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:56.351245 kubelet[2286]: E0625 16:32:56.351244 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:56.351372 kubelet[2286]: W0625 16:32:56.351253 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:56.351372 kubelet[2286]: E0625 16:32:56.351284 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.330777 kubelet[2286]: I0625 16:32:57.327231 2286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:32:57.331895 kubelet[2286]: E0625 16:32:57.331855 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:57.369250 kubelet[2286]: E0625 16:32:57.369025 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.369250 kubelet[2286]: W0625 16:32:57.369053 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.369250 kubelet[2286]: E0625 16:32:57.369088 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.374099 kubelet[2286]: E0625 16:32:57.374062 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.374302 kubelet[2286]: W0625 16:32:57.374280 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.374399 kubelet[2286]: E0625 16:32:57.374385 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.379226 kubelet[2286]: E0625 16:32:57.379170 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.379226 kubelet[2286]: W0625 16:32:57.379199 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.379226 kubelet[2286]: E0625 16:32:57.379227 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.379871 kubelet[2286]: E0625 16:32:57.379625 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.379871 kubelet[2286]: W0625 16:32:57.379646 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.379871 kubelet[2286]: E0625 16:32:57.379660 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.380627 kubelet[2286]: E0625 16:32:57.380494 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.380627 kubelet[2286]: W0625 16:32:57.380512 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.380627 kubelet[2286]: E0625 16:32:57.380534 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.381274 kubelet[2286]: E0625 16:32:57.381241 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.381274 kubelet[2286]: W0625 16:32:57.381257 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.381274 kubelet[2286]: E0625 16:32:57.381272 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.381786 kubelet[2286]: E0625 16:32:57.381651 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.381786 kubelet[2286]: W0625 16:32:57.381667 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.381786 kubelet[2286]: E0625 16:32:57.381683 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.382419 kubelet[2286]: E0625 16:32:57.382359 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.382419 kubelet[2286]: W0625 16:32:57.382376 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.382419 kubelet[2286]: E0625 16:32:57.382390 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.383650 kubelet[2286]: E0625 16:32:57.383610 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.383650 kubelet[2286]: W0625 16:32:57.383627 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.383650 kubelet[2286]: E0625 16:32:57.383641 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.384732 kubelet[2286]: E0625 16:32:57.384163 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.384732 kubelet[2286]: W0625 16:32:57.384186 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.384732 kubelet[2286]: E0625 16:32:57.384222 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.384732 kubelet[2286]: E0625 16:32:57.384518 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.384732 kubelet[2286]: W0625 16:32:57.384527 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.384732 kubelet[2286]: E0625 16:32:57.384543 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.385401 kubelet[2286]: E0625 16:32:57.385100 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.385401 kubelet[2286]: W0625 16:32:57.385121 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.385401 kubelet[2286]: E0625 16:32:57.385135 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.386436 kubelet[2286]: E0625 16:32:57.385686 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.386436 kubelet[2286]: W0625 16:32:57.385697 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.386436 kubelet[2286]: E0625 16:32:57.385711 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.386850 kubelet[2286]: E0625 16:32:57.386643 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.386850 kubelet[2286]: W0625 16:32:57.386659 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.386850 kubelet[2286]: E0625 16:32:57.386676 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.389729 kubelet[2286]: E0625 16:32:57.388134 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.389729 kubelet[2286]: W0625 16:32:57.388146 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.389729 kubelet[2286]: E0625 16:32:57.388163 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.390129 kubelet[2286]: E0625 16:32:57.390109 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.390129 kubelet[2286]: W0625 16:32:57.390128 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.390203 kubelet[2286]: E0625 16:32:57.390144 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.393679 kubelet[2286]: E0625 16:32:57.393479 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.393679 kubelet[2286]: W0625 16:32:57.393502 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.393679 kubelet[2286]: E0625 16:32:57.393525 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.402638 kubelet[2286]: E0625 16:32:57.401101 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.402638 kubelet[2286]: W0625 16:32:57.401128 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.402638 kubelet[2286]: E0625 16:32:57.401162 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.402638 kubelet[2286]: E0625 16:32:57.402421 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.402638 kubelet[2286]: W0625 16:32:57.402429 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.402638 kubelet[2286]: E0625 16:32:57.402549 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.410924 kubelet[2286]: E0625 16:32:57.410883 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.410924 kubelet[2286]: W0625 16:32:57.410914 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.411108 kubelet[2286]: E0625 16:32:57.411086 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.411324 kubelet[2286]: E0625 16:32:57.411306 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.411324 kubelet[2286]: W0625 16:32:57.411321 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.411429 kubelet[2286]: E0625 16:32:57.411410 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.411629 kubelet[2286]: E0625 16:32:57.411610 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.411629 kubelet[2286]: W0625 16:32:57.411624 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.411728 kubelet[2286]: E0625 16:32:57.411713 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.411851 kubelet[2286]: E0625 16:32:57.411836 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.411851 kubelet[2286]: W0625 16:32:57.411848 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.411927 kubelet[2286]: E0625 16:32:57.411861 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.412051 kubelet[2286]: E0625 16:32:57.412035 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.412051 kubelet[2286]: W0625 16:32:57.412050 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.412118 kubelet[2286]: E0625 16:32:57.412063 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.412294 kubelet[2286]: E0625 16:32:57.412278 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.412294 kubelet[2286]: W0625 16:32:57.412288 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.412380 kubelet[2286]: E0625 16:32:57.412299 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.412699 kubelet[2286]: E0625 16:32:57.412682 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.412699 kubelet[2286]: W0625 16:32:57.412695 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.412791 kubelet[2286]: E0625 16:32:57.412733 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.418914 kubelet[2286]: E0625 16:32:57.418870 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.418914 kubelet[2286]: W0625 16:32:57.418902 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.419158 kubelet[2286]: E0625 16:32:57.419061 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.419429 kubelet[2286]: E0625 16:32:57.419410 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.419429 kubelet[2286]: W0625 16:32:57.419425 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.419493 kubelet[2286]: E0625 16:32:57.419454 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.419814 kubelet[2286]: E0625 16:32:57.419797 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.419814 kubelet[2286]: W0625 16:32:57.419812 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.419903 kubelet[2286]: E0625 16:32:57.419832 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.420111 kubelet[2286]: E0625 16:32:57.420067 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.420155 kubelet[2286]: W0625 16:32:57.420111 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.420155 kubelet[2286]: E0625 16:32:57.420124 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.425759 kubelet[2286]: E0625 16:32:57.425413 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.425759 kubelet[2286]: W0625 16:32:57.425455 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.425759 kubelet[2286]: E0625 16:32:57.425502 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.425759 kubelet[2286]: E0625 16:32:57.425990 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.425759 kubelet[2286]: W0625 16:32:57.425998 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.425759 kubelet[2286]: E0625 16:32:57.426012 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:32:57.426566 kubelet[2286]: E0625 16:32:57.426546 2286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:32:57.426566 kubelet[2286]: W0625 16:32:57.426562 2286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:32:57.426639 kubelet[2286]: E0625 16:32:57.426576 2286 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:32:57.971238 containerd[1288]: time="2024-06-25T16:32:57.971169440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:57.975891 containerd[1288]: time="2024-06-25T16:32:57.975810328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:32:57.998780 containerd[1288]: time="2024-06-25T16:32:57.998695949Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:58.008315 containerd[1288]: time="2024-06-25T16:32:58.008273759Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:58.015055 containerd[1288]: time="2024-06-25T16:32:58.015004828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:32:58.015891 containerd[1288]: time="2024-06-25T16:32:58.015852969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.442417529s" Jun 25 16:32:58.016005 containerd[1288]: time="2024-06-25T16:32:58.015983904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:32:58.020240 containerd[1288]: time="2024-06-25T16:32:58.020202670Z" level=info msg="CreateContainer within sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:32:58.052535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80825844.mount: Deactivated successfully. 
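Editor's note: the repeated kubelet FlexVolume warnings above come from plugin probing: the kubelet finds the nodeagent~uds plugin directory but the uds executable is not installed yet (installing it is what the flexvol-driver container created from the pod2daemon-flexvol image pulled above does), so the `init` call produces no output and the JSON unmarshal fails. For orientation only, a minimal sketch of the `init` handshake such a driver is expected to answer; a real driver is a compiled binary, and this Python stub with invented structure is ours, not Calico's.

```python
#!/usr/bin/env python3
# Illustrative FlexVolume-style "init" responder (sketch only, not Calico's uds driver).
# When the kubelet probes a driver it runs `<driver> init` and expects a JSON
# status object on stdout; an empty reply yields the "unexpected end of JSON
# input" errors seen in the log above.
import json
import sys


def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # "attach": False tells the kubelet this driver has no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # All other driver calls are unsupported in this sketch.
    print(json.dumps({"status": "Not supported"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())
```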
Jun 25 16:32:58.115346 containerd[1288]: time="2024-06-25T16:32:58.107369502Z" level=info msg="CreateContainer within sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\"" Jun 25 16:32:58.115346 containerd[1288]: time="2024-06-25T16:32:58.112272051Z" level=info msg="StartContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\"" Jun 25 16:32:58.134208 kubelet[2286]: E0625 16:32:58.133559 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:32:58.186788 systemd[1]: Started cri-containerd-36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f.scope - libcontainer container 36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f. Jun 25 16:32:58.208000 audit: BPF prog-id=123 op=LOAD Jun 25 16:32:58.208000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2756 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:58.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613334346463396430306161393436353861386139326234306537 Jun 25 16:32:58.208000 audit: BPF prog-id=124 op=LOAD Jun 25 16:32:58.208000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2756 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:58.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613334346463396430306161393436353861386139326234306537 Jun 25 16:32:58.208000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:32:58.208000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:32:58.208000 audit: BPF prog-id=125 op=LOAD Jun 25 16:32:58.208000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2756 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:58.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613334346463396430306161393436353861386139326234306537 Jun 25 16:32:58.244367 containerd[1288]: time="2024-06-25T16:32:58.244309220Z" level=info msg="StartContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" returns successfully" Jun 25 16:32:58.246841 systemd[1]: 
cri-containerd-36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f.scope: Deactivated successfully. Jun 25 16:32:58.260000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:32:58.334549 containerd[1288]: time="2024-06-25T16:32:58.334480309Z" level=info msg="StopContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" with timeout 5 (s)" Jun 25 16:32:58.468957 kubelet[2286]: I0625 16:32:58.442360 2286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:32:58.468957 kubelet[2286]: E0625 16:32:58.443023 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:58.685000 audit[2956]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:58.685000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc0760d1e0 a2=0 a3=7ffc0760d1cc items=0 ppid=2478 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:58.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:58.686000 audit[2956]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:58.686000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc0760d1e0 a2=0 a3=7ffc0760d1cc items=0 ppid=2478 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:58.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:58.766365 containerd[1288]: time="2024-06-25T16:32:58.763516470Z" level=info msg="Stop container \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" with signal terminated" Jun 25 16:32:58.766365 containerd[1288]: time="2024-06-25T16:32:58.764068526Z" level=info msg="shim disconnected" id=36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f namespace=k8s.io Jun 25 16:32:58.766365 containerd[1288]: time="2024-06-25T16:32:58.764113440Z" level=warning msg="cleaning up after shim disconnected" id=36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f namespace=k8s.io Jun 25 16:32:58.766365 containerd[1288]: time="2024-06-25T16:32:58.764122316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:32:58.804161 containerd[1288]: time="2024-06-25T16:32:58.802897657Z" level=info msg="StopContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" returns successfully" Jun 25 16:32:58.804161 containerd[1288]: time="2024-06-25T16:32:58.803613270Z" level=info msg="StopPodSandbox for \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\"" Jun 25 16:32:58.804161 containerd[1288]: time="2024-06-25T16:32:58.803672862Z" level=info msg="Container to stop \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:32:58.817757 
systemd[1]: cri-containerd-be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da.scope: Deactivated successfully. Jun 25 16:32:58.816000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:32:58.822000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:32:58.900920 containerd[1288]: time="2024-06-25T16:32:58.899701711Z" level=info msg="shim disconnected" id=be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da namespace=k8s.io Jun 25 16:32:58.900920 containerd[1288]: time="2024-06-25T16:32:58.899794165Z" level=warning msg="cleaning up after shim disconnected" id=be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da namespace=k8s.io Jun 25 16:32:58.900920 containerd[1288]: time="2024-06-25T16:32:58.899804284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:32:58.927544 containerd[1288]: time="2024-06-25T16:32:58.926756642Z" level=info msg="TearDown network for sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" successfully" Jun 25 16:32:58.927544 containerd[1288]: time="2024-06-25T16:32:58.926803530Z" level=info msg="StopPodSandbox for \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" returns successfully" Jun 25 16:32:59.050042 systemd[1]: run-containerd-runc-k8s.io-36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f-runc.ZVrl4r.mount: Deactivated successfully. Jun 25 16:32:59.050159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f-rootfs.mount: Deactivated successfully. Jun 25 16:32:59.050234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da-rootfs.mount: Deactivated successfully. Jun 25 16:32:59.050300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da-shm.mount: Deactivated successfully. 
Jun 25 16:32:59.126701 kubelet[2286]: I0625 16:32:59.126336 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-lib-modules\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126769 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-tigera-ca-bundle\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126797 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-xtables-lock\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126826 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-flexvol-driver-host\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126853 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-bin-dir\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126876 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-policysync\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.126948 kubelet[2286]: I0625 16:32:59.126906 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k875l\" (UniqueName: \"kubernetes.io/projected/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-kube-api-access-k875l\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 16:32:59.126931 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-run-calico\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 16:32:59.126960 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-log-dir\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 16:32:59.126982 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-lib-calico\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 
16:32:59.127012 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-node-certs\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 16:32:59.127038 2286 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-net-dir\") pod \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\" (UID: \"91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e\") " Jun 25 16:32:59.127116 kubelet[2286]: I0625 16:32:59.127082 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127272 kubelet[2286]: I0625 16:32:59.126452 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127595 kubelet[2286]: I0625 16:32:59.127570 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:32:59.127907 kubelet[2286]: I0625 16:32:59.127616 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127907 kubelet[2286]: I0625 16:32:59.127636 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127907 kubelet[2286]: I0625 16:32:59.127654 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127907 kubelet[2286]: I0625 16:32:59.127715 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.127907 kubelet[2286]: I0625 16:32:59.127775 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.128127 kubelet[2286]: I0625 16:32:59.127807 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.128127 kubelet[2286]: I0625 16:32:59.127825 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-policysync" (OuterVolumeSpecName: "policysync") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:32:59.135493 systemd[1]: var-lib-kubelet-pods-91e183b5\x2dfd5e\x2d4cbd\x2dac19\x2dc8a8d84a7b2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk875l.mount: Deactivated successfully. Jun 25 16:32:59.141266 kubelet[2286]: I0625 16:32:59.139344 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-kube-api-access-k875l" (OuterVolumeSpecName: "kube-api-access-k875l") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "kube-api-access-k875l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:32:59.144340 systemd[1]: var-lib-kubelet-pods-91e183b5\x2dfd5e\x2d4cbd\x2dac19\x2dc8a8d84a7b2e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 16:32:59.146632 kubelet[2286]: I0625 16:32:59.145987 2286 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-node-certs" (OuterVolumeSpecName: "node-certs") pod "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" (UID: "91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:32:59.230014 kubelet[2286]: I0625 16:32:59.228034 2286 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.230014 kubelet[2286]: I0625 16:32:59.229307 2286 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238100 kubelet[2286]: I0625 16:32:59.231945 2286 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238575 2286 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238606 2286 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238620 2286 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-policysync\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238644 2286 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k875l\" (UniqueName: \"kubernetes.io/projected/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-kube-api-access-k875l\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238659 2286 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238673 2286 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238686 2286 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-node-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.238894 kubelet[2286]: I0625 16:32:59.238699 2286 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.239206 kubelet[2286]: I0625 16:32:59.238712 2286 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 16:32:59.351849 kubelet[2286]: E0625 16:32:59.351105 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:32:59.356160 
kubelet[2286]: I0625 16:32:59.353536 2286 scope.go:117] "RemoveContainer" containerID="36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f" Jun 25 16:32:59.359032 containerd[1288]: time="2024-06-25T16:32:59.358979196Z" level=info msg="RemoveContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\"" Jun 25 16:32:59.383396 systemd[1]: Removed slice kubepods-besteffort-pod91e183b5_fd5e_4cbd_ac19_c8a8d84a7b2e.slice - libcontainer container kubepods-besteffort-pod91e183b5_fd5e_4cbd_ac19_c8a8d84a7b2e.slice. Jun 25 16:32:59.405219 containerd[1288]: time="2024-06-25T16:32:59.405093682Z" level=info msg="RemoveContainer for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" returns successfully" Jun 25 16:32:59.405937 kubelet[2286]: I0625 16:32:59.405905 2286 scope.go:117] "RemoveContainer" containerID="36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f" Jun 25 16:32:59.406893 containerd[1288]: time="2024-06-25T16:32:59.406682042Z" level=error msg="ContainerStatus for \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\": not found" Jun 25 16:32:59.407244 kubelet[2286]: E0625 16:32:59.407047 2286 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\": not found" containerID="36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f" Jun 25 16:32:59.407244 kubelet[2286]: I0625 16:32:59.407118 2286 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f"} err="failed to get container status \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\": rpc error: code = NotFound desc = an error occurred when try to find container \"36a344dc9d00aa94658a8a92b40e77a336bc12a4e2b0f237b0c50aac6561013f\": not found" Jun 25 16:32:59.454696 kubelet[2286]: I0625 16:32:59.453677 2286 topology_manager.go:215] "Topology Admit Handler" podUID="131b3c30-7d05-4bae-ad9e-37d5042d3a05" podNamespace="calico-system" podName="calico-node-9bzkd" Jun 25 16:32:59.454696 kubelet[2286]: E0625 16:32:59.453798 2286 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" containerName="flexvol-driver" Jun 25 16:32:59.454696 kubelet[2286]: I0625 16:32:59.453845 2286 memory_manager.go:346] "RemoveStaleState removing state" podUID="91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" containerName="flexvol-driver" Jun 25 16:32:59.461897 systemd[1]: Created slice kubepods-besteffort-pod131b3c30_7d05_4bae_ad9e_37d5042d3a05.slice - libcontainer container kubepods-besteffort-pod131b3c30_7d05_4bae_ad9e_37d5042d3a05.slice. 
Jun 25 16:32:59.647520 kubelet[2286]: I0625 16:32:59.646728 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-policysync\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.647520 kubelet[2286]: I0625 16:32:59.646803 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-var-run-calico\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.647520 kubelet[2286]: I0625 16:32:59.646828 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-cni-log-dir\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.647520 kubelet[2286]: I0625 16:32:59.646852 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-var-lib-calico\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.647520 kubelet[2286]: I0625 16:32:59.646876 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/131b3c30-7d05-4bae-ad9e-37d5042d3a05-tigera-ca-bundle\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648156 kubelet[2286]: I0625 16:32:59.646899 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-lib-modules\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648156 kubelet[2286]: I0625 16:32:59.646923 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-cni-bin-dir\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648156 kubelet[2286]: I0625 16:32:59.646949 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6842\" (UniqueName: \"kubernetes.io/projected/131b3c30-7d05-4bae-ad9e-37d5042d3a05-kube-api-access-x6842\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648156 kubelet[2286]: I0625 16:32:59.646974 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-xtables-lock\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648156 kubelet[2286]: I0625 16:32:59.646999 2286 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-cni-net-dir\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648309 kubelet[2286]: I0625 16:32:59.647045 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/131b3c30-7d05-4bae-ad9e-37d5042d3a05-node-certs\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:32:59.648309 kubelet[2286]: I0625 16:32:59.647070 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/131b3c30-7d05-4bae-ad9e-37d5042d3a05-flexvol-driver-host\") pod \"calico-node-9bzkd\" (UID: \"131b3c30-7d05-4bae-ad9e-37d5042d3a05\") " pod="calico-system/calico-node-9bzkd" Jun 25 16:33:00.069615 kubelet[2286]: E0625 16:33:00.069051 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:00.071565 containerd[1288]: time="2024-06-25T16:33:00.070109958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bzkd,Uid:131b3c30-7d05-4bae-ad9e-37d5042d3a05,Namespace:calico-system,Attempt:0,}" Jun 25 16:33:00.127145 containerd[1288]: time="2024-06-25T16:33:00.126048780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:00.127145 containerd[1288]: time="2024-06-25T16:33:00.126116207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:00.127145 containerd[1288]: time="2024-06-25T16:33:00.126143909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:00.127145 containerd[1288]: time="2024-06-25T16:33:00.126159799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:00.137796 kubelet[2286]: E0625 16:33:00.133604 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:00.148582 kubelet[2286]: I0625 16:33:00.148539 2286 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e" path="/var/lib/kubelet/pods/91e183b5-fd5e-4cbd-ac19-c8a8d84a7b2e/volumes" Jun 25 16:33:00.169578 systemd[1]: Started cri-containerd-bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc.scope - libcontainer container bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc. 
Jun 25 16:33:00.187000 audit: BPF prog-id=126 op=LOAD Jun 25 16:33:00.188000 audit: BPF prog-id=127 op=LOAD Jun 25 16:33:00.188000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3009 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263356231306562633730353963346164363066633363373439653265 Jun 25 16:33:00.188000 audit: BPF prog-id=128 op=LOAD Jun 25 16:33:00.188000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3009 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263356231306562633730353963346164363066633363373439653265 Jun 25 16:33:00.188000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:33:00.188000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:33:00.188000 audit: BPF prog-id=129 op=LOAD Jun 25 16:33:00.188000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3009 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263356231306562633730353963346164363066633363373439653265 Jun 25 16:33:00.216001 containerd[1288]: time="2024-06-25T16:33:00.215784476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bzkd,Uid:131b3c30-7d05-4bae-ad9e-37d5042d3a05,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\"" Jun 25 16:33:00.218464 kubelet[2286]: E0625 16:33:00.218211 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:00.221548 containerd[1288]: time="2024-06-25T16:33:00.221495630Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:33:00.274957 containerd[1288]: time="2024-06-25T16:33:00.274597199Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5\"" Jun 25 16:33:00.277507 containerd[1288]: time="2024-06-25T16:33:00.275430452Z" level=info msg="StartContainer for 
\"b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5\"" Jun 25 16:33:00.346613 systemd[1]: Started cri-containerd-b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5.scope - libcontainer container b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5. Jun 25 16:33:00.382000 audit: BPF prog-id=130 op=LOAD Jun 25 16:33:00.382000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3009 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333239656535316235303135353539333762613034353737346435 Jun 25 16:33:00.382000 audit: BPF prog-id=131 op=LOAD Jun 25 16:33:00.382000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3009 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333239656535316235303135353539333762613034353737346435 Jun 25 16:33:00.382000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:33:00.382000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:33:00.382000 audit: BPF prog-id=132 op=LOAD Jun 25 16:33:00.382000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3009 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333239656535316235303135353539333762613034353737346435 Jun 25 16:33:00.435279 containerd[1288]: time="2024-06-25T16:33:00.428774859Z" level=info msg="StartContainer for \"b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5\" returns successfully" Jun 25 16:33:00.443229 systemd[1]: cri-containerd-b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5.scope: Deactivated successfully. 
Jun 25 16:33:00.453000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:33:00.531374 containerd[1288]: time="2024-06-25T16:33:00.531254031Z" level=info msg="shim disconnected" id=b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5 namespace=k8s.io Jun 25 16:33:00.531374 containerd[1288]: time="2024-06-25T16:33:00.531341976Z" level=warning msg="cleaning up after shim disconnected" id=b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5 namespace=k8s.io Jun 25 16:33:00.531374 containerd[1288]: time="2024-06-25T16:33:00.531360993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:33:01.097642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4329ee51b501555937ba045774d5c62edffc9ba7794d0ee842ebc7316dfdda5-rootfs.mount: Deactivated successfully. Jun 25 16:33:01.374859 kubelet[2286]: E0625 16:33:01.372152 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:01.392889 containerd[1288]: time="2024-06-25T16:33:01.380140554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:33:02.131839 kubelet[2286]: E0625 16:33:02.131794 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:04.140715 kubelet[2286]: E0625 16:33:04.139592 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:06.133621 kubelet[2286]: E0625 16:33:06.133582 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:08.131645 kubelet[2286]: E0625 16:33:08.131587 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:08.270770 containerd[1288]: time="2024-06-25T16:33:08.269913870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:08.341413 containerd[1288]: time="2024-06-25T16:33:08.341307371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:33:08.397379 containerd[1288]: time="2024-06-25T16:33:08.397032383Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:08.550521 containerd[1288]: time="2024-06-25T16:33:08.550444538Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:08.621640 containerd[1288]: time="2024-06-25T16:33:08.621572607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:08.622872 containerd[1288]: time="2024-06-25T16:33:08.622595663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 7.242398402s" Jun 25 16:33:08.622872 containerd[1288]: time="2024-06-25T16:33:08.622657983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:33:08.625283 containerd[1288]: time="2024-06-25T16:33:08.624987101Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:33:10.036649 containerd[1288]: time="2024-06-25T16:33:10.036586467Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d\"" Jun 25 16:33:10.037168 containerd[1288]: time="2024-06-25T16:33:10.037140154Z" level=info msg="StartContainer for \"6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d\"" Jun 25 16:33:10.063988 systemd[1]: Started cri-containerd-6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d.scope - libcontainer container 6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d. 
Jun 25 16:33:10.133201 kubelet[2286]: E0625 16:33:10.131295 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:10.141000 audit: BPF prog-id=133 op=LOAD Jun 25 16:33:10.260706 kernel: kauditd_printk_skb: 46 callbacks suppressed Jun 25 16:33:10.260903 kernel: audit: type=1334 audit(1719333190.141:501): prog-id=133 op=LOAD Jun 25 16:33:10.141000 audit[3119]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.267314 kernel: audit: type=1300 audit(1719333190.141:501): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.267675 kernel: audit: type=1327 audit(1719333190.141:501): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373262633738613731323933333337363966623838336465383735 Jun 25 16:33:10.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373262633738613731323933333337363966623838336465383735 Jun 25 16:33:10.141000 audit: BPF prog-id=134 op=LOAD Jun 25 16:33:10.275129 kernel: audit: type=1334 audit(1719333190.141:502): prog-id=134 op=LOAD Jun 25 16:33:10.275413 kernel: audit: type=1300 audit(1719333190.141:502): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.141000 audit[3119]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.283722 kernel: audit: type=1327 audit(1719333190.141:502): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373262633738613731323933333337363966623838336465383735 Jun 25 16:33:10.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373262633738613731323933333337363966623838336465383735 Jun 25 16:33:10.141000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:33:10.295887 kernel: audit: type=1334 audit(1719333190.141:503): prog-id=134 
op=UNLOAD Jun 25 16:33:10.296042 kernel: audit: type=1334 audit(1719333190.141:504): prog-id=133 op=UNLOAD Jun 25 16:33:10.296070 kernel: audit: type=1334 audit(1719333190.141:505): prog-id=135 op=LOAD Jun 25 16:33:10.296094 kernel: audit: type=1300 audit(1719333190.141:505): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.141000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:33:10.141000 audit: BPF prog-id=135 op=LOAD Jun 25 16:33:10.141000 audit[3119]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3009 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:10.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373262633738613731323933333337363966623838336465383735 Jun 25 16:33:11.487037 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:54090.service - OpenSSH per-connection server daemon (10.0.0.1:54090). Jun 25 16:33:11.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:54090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:13.043000 audit[3148]: USER_ACCT pid=3148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.045400 containerd[1288]: time="2024-06-25T16:33:13.045076903Z" level=info msg="StartContainer for \"6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d\" returns successfully" Jun 25 16:33:13.045790 sshd[3148]: Accepted publickey for core from 10.0.0.1 port 54090 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:13.045000 audit[3148]: CRED_ACQ pid=3148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.045000 audit[3148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc71a3fd0 a2=3 a3=7f41a794d480 items=0 ppid=1 pid=3148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:13.045000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:13.048133 sshd[3148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:13.056760 kubelet[2286]: E0625 16:33:13.056635 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:13.141530 
systemd-logind[1274]: New session 8 of user core. Jun 25 16:33:13.148952 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:33:13.152000 audit[3148]: USER_START pid=3148 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.154000 audit[3152]: CRED_ACQ pid=3152 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.403595 sshd[3148]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:13.404000 audit[3148]: USER_END pid=3148 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.405000 audit[3148]: CRED_DISP pid=3148 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:13.408201 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:54090.service: Deactivated successfully. Jun 25 16:33:13.409187 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:33:13.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:54090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:13.414599 systemd-logind[1274]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:33:13.417101 systemd-logind[1274]: Removed session 8. Jun 25 16:33:14.058572 kubelet[2286]: E0625 16:33:14.058541 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:15.133663 kubelet[2286]: E0625 16:33:15.133620 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:17.131954 kubelet[2286]: E0625 16:33:17.131882 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:17.650130 containerd[1288]: time="2024-06-25T16:33:17.650093095Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:33:17.668103 systemd[1]: cri-containerd-6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d.scope: Deactivated successfully. 
Jun 25 16:33:17.671000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:33:17.673964 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:33:17.674037 kernel: audit: type=1334 audit(1719333197.671:515): prog-id=135 op=UNLOAD Jun 25 16:33:17.698461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d-rootfs.mount: Deactivated successfully. Jun 25 16:33:17.726425 kubelet[2286]: I0625 16:33:17.726381 2286 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:33:17.931112 kubelet[2286]: I0625 16:33:17.906640 2286 topology_manager.go:215] "Topology Admit Handler" podUID="b863dc49-acd3-403d-a912-7a94220388dd" podNamespace="kube-system" podName="coredns-5dd5756b68-vschb" Jun 25 16:33:17.931112 kubelet[2286]: I0625 16:33:17.909889 2286 topology_manager.go:215] "Topology Admit Handler" podUID="18658f3c-a24d-421b-be97-f9cb52930d97" podNamespace="calico-system" podName="calico-kube-controllers-7dfd458b6c-tdlbz" Jun 25 16:33:17.931112 kubelet[2286]: I0625 16:33:17.910066 2286 topology_manager.go:215] "Topology Admit Handler" podUID="1ecf1669-3c1d-4bb9-be93-082a2bca0c94" podNamespace="kube-system" podName="coredns-5dd5756b68-p68b4" Jun 25 16:33:17.914943 systemd[1]: Created slice kubepods-burstable-podb863dc49_acd3_403d_a912_7a94220388dd.slice - libcontainer container kubepods-burstable-podb863dc49_acd3_403d_a912_7a94220388dd.slice. Jun 25 16:33:17.920978 systemd[1]: Created slice kubepods-burstable-pod1ecf1669_3c1d_4bb9_be93_082a2bca0c94.slice - libcontainer container kubepods-burstable-pod1ecf1669_3c1d_4bb9_be93_082a2bca0c94.slice. Jun 25 16:33:17.926361 systemd[1]: Created slice kubepods-besteffort-pod18658f3c_a24d_421b_be97_f9cb52930d97.slice - libcontainer container kubepods-besteffort-pod18658f3c_a24d_421b_be97_f9cb52930d97.slice. 
Jun 25 16:33:17.932851 containerd[1288]: time="2024-06-25T16:33:17.932709449Z" level=info msg="shim disconnected" id=6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d namespace=k8s.io Jun 25 16:33:17.933040 containerd[1288]: time="2024-06-25T16:33:17.932860789Z" level=warning msg="cleaning up after shim disconnected" id=6672bc78a7129333769fb883de8758a53b68198d1263e8797f456e5a5305572d namespace=k8s.io Jun 25 16:33:17.933040 containerd[1288]: time="2024-06-25T16:33:17.932874556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:33:18.061135 kubelet[2286]: I0625 16:33:18.061039 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnz78\" (UniqueName: \"kubernetes.io/projected/18658f3c-a24d-421b-be97-f9cb52930d97-kube-api-access-wnz78\") pod \"calico-kube-controllers-7dfd458b6c-tdlbz\" (UID: \"18658f3c-a24d-421b-be97-f9cb52930d97\") " pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" Jun 25 16:33:18.061135 kubelet[2286]: I0625 16:33:18.061137 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b863dc49-acd3-403d-a912-7a94220388dd-config-volume\") pod \"coredns-5dd5756b68-vschb\" (UID: \"b863dc49-acd3-403d-a912-7a94220388dd\") " pod="kube-system/coredns-5dd5756b68-vschb" Jun 25 16:33:18.061555 kubelet[2286]: I0625 16:33:18.061218 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ecf1669-3c1d-4bb9-be93-082a2bca0c94-config-volume\") pod \"coredns-5dd5756b68-p68b4\" (UID: \"1ecf1669-3c1d-4bb9-be93-082a2bca0c94\") " pod="kube-system/coredns-5dd5756b68-p68b4" Jun 25 16:33:18.061555 kubelet[2286]: I0625 16:33:18.061249 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllfq\" (UniqueName: \"kubernetes.io/projected/b863dc49-acd3-403d-a912-7a94220388dd-kube-api-access-wllfq\") pod \"coredns-5dd5756b68-vschb\" (UID: \"b863dc49-acd3-403d-a912-7a94220388dd\") " pod="kube-system/coredns-5dd5756b68-vschb" Jun 25 16:33:18.061555 kubelet[2286]: I0625 16:33:18.061283 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18658f3c-a24d-421b-be97-f9cb52930d97-tigera-ca-bundle\") pod \"calico-kube-controllers-7dfd458b6c-tdlbz\" (UID: \"18658f3c-a24d-421b-be97-f9cb52930d97\") " pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" Jun 25 16:33:18.061555 kubelet[2286]: I0625 16:33:18.061308 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x2l\" (UniqueName: \"kubernetes.io/projected/1ecf1669-3c1d-4bb9-be93-082a2bca0c94-kube-api-access-96x2l\") pod \"coredns-5dd5756b68-p68b4\" (UID: \"1ecf1669-3c1d-4bb9-be93-082a2bca0c94\") " pod="kube-system/coredns-5dd5756b68-p68b4" Jun 25 16:33:18.096903 kubelet[2286]: E0625 16:33:18.096865 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:18.098527 containerd[1288]: time="2024-06-25T16:33:18.098471030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:33:18.417307 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:56142.service - OpenSSH per-connection 
server daemon (10.0.0.1:56142). Jun 25 16:33:18.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:56142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:18.424040 kernel: audit: type=1130 audit(1719333198.416:516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:56142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:18.488000 audit[3191]: USER_ACCT pid=3191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.495316 sshd[3191]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:18.495786 kernel: audit: type=1101 audit(1719333198.488:517): pid=3191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.494000 audit[3191]: CRED_ACQ pid=3191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.496542 sshd[3191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:18.500816 kernel: audit: type=1103 audit(1719333198.494:518): pid=3191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.506455 systemd-logind[1274]: New session 9 of user core. 
Jun 25 16:33:18.548906 kernel: audit: type=1006 audit(1719333198.494:519): pid=3191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:33:18.549230 kernel: audit: type=1300 audit(1719333198.494:519): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3a9df440 a2=3 a3=7f584437e480 items=0 ppid=1 pid=3191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:18.549270 kernel: audit: type=1327 audit(1719333198.494:519): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:18.494000 audit[3191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3a9df440 a2=3 a3=7f584437e480 items=0 ppid=1 pid=3191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:18.494000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:18.549641 kubelet[2286]: E0625 16:33:18.537170 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:18.549964 containerd[1288]: time="2024-06-25T16:33:18.538141755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vschb,Uid:b863dc49-acd3-403d-a912-7a94220388dd,Namespace:kube-system,Attempt:0,}" Jun 25 16:33:18.545078 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:33:18.584000 audit[3191]: USER_START pid=3191 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.586000 audit[3196]: CRED_ACQ pid=3196 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.593981 kernel: audit: type=1105 audit(1719333198.584:520): pid=3191 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.594128 kernel: audit: type=1103 audit(1719333198.586:521): pid=3196 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.833243 kubelet[2286]: E0625 16:33:18.831900 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:18.836208 containerd[1288]: time="2024-06-25T16:33:18.834040079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd458b6c-tdlbz,Uid:18658f3c-a24d-421b-be97-f9cb52930d97,Namespace:calico-system,Attempt:0,}" Jun 25 16:33:18.841039 containerd[1288]: time="2024-06-25T16:33:18.838508779Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-p68b4,Uid:1ecf1669-3c1d-4bb9-be93-082a2bca0c94,Namespace:kube-system,Attempt:0,}" Jun 25 16:33:18.876031 sshd[3191]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:18.876000 audit[3191]: USER_END pid=3191 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.880109 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:56142.service: Deactivated successfully. Jun 25 16:33:18.881002 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:33:18.882359 systemd-logind[1274]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:33:18.883475 systemd-logind[1274]: Removed session 9. Jun 25 16:33:18.876000 audit[3191]: CRED_DISP pid=3191 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:18.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:56142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:18.887308 kernel: audit: type=1106 audit(1719333198.876:522): pid=3191 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:19.143148 systemd[1]: Created slice kubepods-besteffort-pod72bf43a2_ad8b_409f_8c68_9b745ebeb647.slice - libcontainer container kubepods-besteffort-pod72bf43a2_ad8b_409f_8c68_9b745ebeb647.slice. 
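The kubelet dns.go:153 warnings above report that only three nameservers were kept ("1.1.1.1 1.0.0.1 8.8.8.8") and the remaining entries from the node's resolv.conf were omitted. Below is a minimal Go sketch of that truncation behaviour; the three-entry limit and the extra 8.8.4.4 nameserver are assumptions inferred from the log line itself, not kubelet source.

```go
// Hedged sketch (not kubelet code): illustrates the behaviour behind the
// "Nameserver limits exceeded" warnings, where only the first few
// nameservers from the host resolv.conf are applied and the rest dropped.
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the three-entry limit implied by the applied line in the log.
const maxNameservers = 3

// applyNameserverLimit truncates the list and reports whether anything was dropped.
func applyNameserverLimit(ns []string) (applied []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := applyNameserverLimit(host)
	if omitted {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```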
Jun 25 16:33:19.159617 containerd[1288]: time="2024-06-25T16:33:19.159565679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xkz9,Uid:72bf43a2-ad8b-409f-8c68-9b745ebeb647,Namespace:calico-system,Attempt:0,}" Jun 25 16:33:19.246540 containerd[1288]: time="2024-06-25T16:33:19.246429960Z" level=error msg="Failed to destroy network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.247879 containerd[1288]: time="2024-06-25T16:33:19.247798733Z" level=error msg="encountered an error cleaning up failed sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.248103 containerd[1288]: time="2024-06-25T16:33:19.248060314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vschb,Uid:b863dc49-acd3-403d-a912-7a94220388dd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.248549 kubelet[2286]: E0625 16:33:19.248489 2286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.248904 kubelet[2286]: E0625 16:33:19.248577 2286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-vschb" Jun 25 16:33:19.248904 kubelet[2286]: E0625 16:33:19.248608 2286 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-vschb" Jun 25 16:33:19.248904 kubelet[2286]: E0625 16:33:19.248681 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-vschb_kube-system(b863dc49-acd3-403d-a912-7a94220388dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-vschb_kube-system(b863dc49-acd3-403d-a912-7a94220388dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vschb" podUID="b863dc49-acd3-403d-a912-7a94220388dd" Jun 25 16:33:19.255358 containerd[1288]: time="2024-06-25T16:33:19.255260464Z" level=error msg="Failed to destroy network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.273930 containerd[1288]: time="2024-06-25T16:33:19.273526612Z" level=error msg="encountered an error cleaning up failed sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.273930 containerd[1288]: time="2024-06-25T16:33:19.273636392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p68b4,Uid:1ecf1669-3c1d-4bb9-be93-082a2bca0c94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.274155 kubelet[2286]: E0625 16:33:19.273936 2286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.274155 kubelet[2286]: E0625 16:33:19.274002 2286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p68b4" Jun 25 16:33:19.274155 kubelet[2286]: E0625 16:33:19.274025 2286 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p68b4" Jun 25 16:33:19.279469 kubelet[2286]: E0625 16:33:19.274099 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-p68b4_kube-system(1ecf1669-3c1d-4bb9-be93-082a2bca0c94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-p68b4_kube-system(1ecf1669-3c1d-4bb9-be93-082a2bca0c94)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p68b4" podUID="1ecf1669-3c1d-4bb9-be93-082a2bca0c94" Jun 25 16:33:19.348237 containerd[1288]: time="2024-06-25T16:33:19.347702014Z" level=error msg="Failed to destroy network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.348458 containerd[1288]: time="2024-06-25T16:33:19.348311812Z" level=error msg="encountered an error cleaning up failed sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.348458 containerd[1288]: time="2024-06-25T16:33:19.348385904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd458b6c-tdlbz,Uid:18658f3c-a24d-421b-be97-f9cb52930d97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.349778 kubelet[2286]: E0625 16:33:19.348682 2286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.349778 kubelet[2286]: E0625 16:33:19.348828 2286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" Jun 25 16:33:19.349778 kubelet[2286]: E0625 16:33:19.348872 2286 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" Jun 25 16:33:19.350002 kubelet[2286]: E0625 16:33:19.348940 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dfd458b6c-tdlbz_calico-system(18658f3c-a24d-421b-be97-f9cb52930d97)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-7dfd458b6c-tdlbz_calico-system(18658f3c-a24d-421b-be97-f9cb52930d97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" podUID="18658f3c-a24d-421b-be97-f9cb52930d97" Jun 25 16:33:19.498220 containerd[1288]: time="2024-06-25T16:33:19.498126070Z" level=error msg="Failed to destroy network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.504792 containerd[1288]: time="2024-06-25T16:33:19.503233920Z" level=error msg="encountered an error cleaning up failed sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.504792 containerd[1288]: time="2024-06-25T16:33:19.503345213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xkz9,Uid:72bf43a2-ad8b-409f-8c68-9b745ebeb647,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.505009 kubelet[2286]: E0625 16:33:19.503652 2286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:19.505009 kubelet[2286]: E0625 16:33:19.503717 2286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:33:19.505009 kubelet[2286]: E0625 16:33:19.503764 2286 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xkz9" Jun 25 16:33:19.505129 kubelet[2286]: E0625 16:33:19.503832 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-7xkz9_calico-system(72bf43a2-ad8b-409f-8c68-9b745ebeb647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7xkz9_calico-system(72bf43a2-ad8b-409f-8c68-9b745ebeb647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:19.958532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166-shm.mount: Deactivated successfully. Jun 25 16:33:19.958630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b-shm.mount: Deactivated successfully. Jun 25 16:33:19.958690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273-shm.mount: Deactivated successfully. Jun 25 16:33:20.106390 kubelet[2286]: I0625 16:33:20.106313 2286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:20.107222 containerd[1288]: time="2024-06-25T16:33:20.107102276Z" level=info msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" Jun 25 16:33:20.107453 containerd[1288]: time="2024-06-25T16:33:20.107357344Z" level=info msg="Ensure that sandbox 665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b in task-service has been cleanup successfully" Jun 25 16:33:20.113120 kubelet[2286]: I0625 16:33:20.109617 2286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:20.113295 containerd[1288]: time="2024-06-25T16:33:20.110236689Z" level=info msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" Jun 25 16:33:20.113295 containerd[1288]: time="2024-06-25T16:33:20.110534349Z" level=info msg="Ensure that sandbox 56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b in task-service has been cleanup successfully" Jun 25 16:33:20.117990 kubelet[2286]: I0625 16:33:20.117322 2286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:20.118171 containerd[1288]: time="2024-06-25T16:33:20.117959783Z" level=info msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" Jun 25 16:33:20.118234 containerd[1288]: time="2024-06-25T16:33:20.118209280Z" level=info msg="Ensure that sandbox c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273 in task-service has been cleanup successfully" Jun 25 16:33:20.125099 kubelet[2286]: I0625 16:33:20.124663 2286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:20.125245 containerd[1288]: time="2024-06-25T16:33:20.125164653Z" level=info msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" Jun 25 16:33:20.125443 containerd[1288]: 
time="2024-06-25T16:33:20.125412238Z" level=info msg="Ensure that sandbox fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166 in task-service has been cleanup successfully" Jun 25 16:33:20.162669 containerd[1288]: time="2024-06-25T16:33:20.162384877Z" level=error msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" failed" error="failed to destroy network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:20.163283 kubelet[2286]: E0625 16:33:20.163079 2286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:20.163283 kubelet[2286]: E0625 16:33:20.163147 2286 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b"} Jun 25 16:33:20.163283 kubelet[2286]: E0625 16:33:20.163193 2286 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:33:20.163283 kubelet[2286]: E0625 16:33:20.163235 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72bf43a2-ad8b-409f-8c68-9b745ebeb647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xkz9" podUID="72bf43a2-ad8b-409f-8c68-9b745ebeb647" Jun 25 16:33:20.186407 containerd[1288]: time="2024-06-25T16:33:20.184349199Z" level=error msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" failed" error="failed to destroy network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:20.186592 kubelet[2286]: E0625 16:33:20.185666 2286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:20.186592 kubelet[2286]: E0625 16:33:20.186407 2286 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b"} Jun 25 16:33:20.186592 kubelet[2286]: E0625 16:33:20.186453 2286 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ecf1669-3c1d-4bb9-be93-082a2bca0c94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:33:20.186592 kubelet[2286]: E0625 16:33:20.186496 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ecf1669-3c1d-4bb9-be93-082a2bca0c94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p68b4" podUID="1ecf1669-3c1d-4bb9-be93-082a2bca0c94" Jun 25 16:33:20.207244 containerd[1288]: time="2024-06-25T16:33:20.207136137Z" level=error msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" failed" error="failed to destroy network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:20.212132 kubelet[2286]: E0625 16:33:20.207456 2286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:20.212132 kubelet[2286]: E0625 16:33:20.207511 2286 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273"} Jun 25 16:33:20.212132 kubelet[2286]: E0625 16:33:20.207569 2286 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b863dc49-acd3-403d-a912-7a94220388dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:33:20.212132 kubelet[2286]: E0625 16:33:20.207609 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b863dc49-acd3-403d-a912-7a94220388dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vschb" podUID="b863dc49-acd3-403d-a912-7a94220388dd" Jun 25 16:33:20.220002 containerd[1288]: time="2024-06-25T16:33:20.219871750Z" level=error msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" failed" error="failed to destroy network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:33:20.220298 kubelet[2286]: E0625 16:33:20.220240 2286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:20.220298 kubelet[2286]: E0625 16:33:20.220299 2286 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166"} Jun 25 16:33:20.220410 kubelet[2286]: E0625 16:33:20.220348 2286 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18658f3c-a24d-421b-be97-f9cb52930d97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:33:20.220410 kubelet[2286]: E0625 16:33:20.220389 2286 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18658f3c-a24d-421b-be97-f9cb52930d97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" podUID="18658f3c-a24d-421b-be97-f9cb52930d97" Jun 25 16:33:20.239000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:20.239000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002743a10 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:20.239000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:20.241000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:20.241000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f026e0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:20.241000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:20.996000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:20.996000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5d a1=c0058c4d60 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:20.996000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:20.996000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:20.996000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5d a1=c00b6b2ae0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:20.996000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:20.996000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:20.996000 audit[2186]: 
SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5e a1=c0100b4d20 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:20.996000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:21.001000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6279 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:21.001000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5d a1=c0100b4d80 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:21.001000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:21.020000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:21.020000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5d a1=c0100b56e0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:21.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:21.020000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:21.020000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5e a1=c00c0649e0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:33:21.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:33:23.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:56156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:23.887426 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:56156.service - OpenSSH per-connection server daemon (10.0.0.1:56156). Jun 25 16:33:23.929013 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:33:23.929252 kernel: audit: type=1130 audit(1719333203.886:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:56156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:24.177473 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 56156 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:24.176000 audit[3453]: USER_ACCT pid=3453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.180002 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:24.183977 kernel: audit: type=1101 audit(1719333204.176:534): pid=3453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.184121 kernel: audit: type=1103 audit(1719333204.176:535): pid=3453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.176000 audit[3453]: CRED_ACQ pid=3453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.188712 systemd-logind[1274]: New session 10 of user core. Jun 25 16:33:24.228300 kernel: audit: type=1006 audit(1719333204.176:536): pid=3453 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:33:24.228343 kernel: audit: type=1300 audit(1719333204.176:536): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc9cd9530 a2=3 a3=7f6bca4e7480 items=0 ppid=1 pid=3453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:24.228363 kernel: audit: type=1327 audit(1719333204.176:536): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:24.176000 audit[3453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc9cd9530 a2=3 a3=7f6bca4e7480 items=0 ppid=1 pid=3453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:24.176000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:24.227159 systemd[1]: Started session-10.scope - Session 10 of User core. 
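Every RunPodSandbox and StopPodSandbox attempt above fails with the same CNI error because /var/lib/calico/nodename does not exist yet; the calico/node image that writes it is still being pulled (see the earlier PullImage line). The sketch below reproduces only the stat check named in the error message; it is not the Calico plugin's actual implementation, just an illustration of the precondition the log reports.

```go
// Hedged sketch of the check the sandbox errors describe: the Calico CNI
// plugin cannot add or delete a pod network until calico-node has written
// /var/lib/calico/nodename on the host.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// checkNodename reproduces the failure mode seen in the RunPodSandbox errors.
func checkNodename() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err already reads like "stat /var/lib/calico/nodename: no such file or directory".
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := checkNodename(); err != nil {
		fmt.Println("plugin type=\"calico\" failed:", err)
		return
	}
	fmt.Println("nodename present; CNI add/delete can proceed")
}
```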
Jun 25 16:33:24.246000 audit[3453]: USER_START pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.247000 audit[3459]: CRED_ACQ pid=3459 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.269309 kernel: audit: type=1105 audit(1719333204.246:537): pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.269492 kernel: audit: type=1103 audit(1719333204.247:538): pid=3459 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.436145 sshd[3453]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:24.436000 audit[3453]: USER_END pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.439358 systemd-logind[1274]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:33:24.439709 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:56156.service: Deactivated successfully. Jun 25 16:33:24.440425 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:33:24.441300 systemd-logind[1274]: Removed session 10. Jun 25 16:33:24.437000 audit[3453]: CRED_DISP pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.468866 kernel: audit: type=1106 audit(1719333204.436:539): pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.469080 kernel: audit: type=1104 audit(1719333204.437:540): pid=3453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:24.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:56156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:26.128700 containerd[1288]: time="2024-06-25T16:33:26.128647917Z" level=info msg="StopPodSandbox for \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\"" Jun 25 16:33:26.129217 containerd[1288]: time="2024-06-25T16:33:26.128765170Z" level=info msg="TearDown network for sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" successfully" Jun 25 16:33:26.129217 containerd[1288]: time="2024-06-25T16:33:26.128812069Z" level=info msg="StopPodSandbox for \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" returns successfully" Jun 25 16:33:26.129865 containerd[1288]: time="2024-06-25T16:33:26.129813452Z" level=info msg="RemovePodSandbox for \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\"" Jun 25 16:33:26.145780 containerd[1288]: time="2024-06-25T16:33:26.129877455Z" level=info msg="Forcibly stopping sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\"" Jun 25 16:33:26.146111 containerd[1288]: time="2024-06-25T16:33:26.146082949Z" level=info msg="TearDown network for sandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" successfully" Jun 25 16:33:26.492548 containerd[1288]: time="2024-06-25T16:33:26.490399013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:33:26.492548 containerd[1288]: time="2024-06-25T16:33:26.490544861Z" level=info msg="RemovePodSandbox \"be4c9e5f80dab2c190eb254407ba39ea102879a37f9cd3123cee92524580f3da\" returns successfully" Jun 25 16:33:28.583000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:28.583000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00149c160 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:28.583000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:28.583000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:28.583000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001e175e0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:28.583000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:29.059800 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:33:29.059964 kernel: audit: type=1400 audit(1719333209.057:544): avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:29.057000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:29.057000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00149c180 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:29.080695 kernel: audit: type=1300 audit(1719333209.057:544): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00149c180 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:29.081782 kernel: audit: type=1327 audit(1719333209.057:544): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:29.057000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:29.060000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:29.060000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028591a0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:29.118157 kernel: audit: type=1400 audit(1719333209.060:545): avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:29.118313 kernel: audit: type=1300 audit(1719333209.060:545): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028591a0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:33:29.060000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:29.122880 kernel: audit: type=1327 audit(1719333209.060:545): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:29.455492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905592019.mount: Deactivated successfully. Jun 25 16:33:29.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:29.468843 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:56372.service - OpenSSH per-connection server daemon (10.0.0.1:56372). Jun 25 16:33:29.474937 kernel: audit: type=1130 audit(1719333209.468:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:29.488231 containerd[1288]: time="2024-06-25T16:33:29.488166775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:29.490438 containerd[1288]: time="2024-06-25T16:33:29.490350518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:33:29.493678 containerd[1288]: time="2024-06-25T16:33:29.493636666Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:29.502419 containerd[1288]: time="2024-06-25T16:33:29.502368387Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:29.506667 containerd[1288]: time="2024-06-25T16:33:29.506629192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:29.507706 containerd[1288]: time="2024-06-25T16:33:29.507672892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 11.409144012s" Jun 25 16:33:29.507812 containerd[1288]: time="2024-06-25T16:33:29.507792740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:33:29.526000 audit[3473]: USER_ACCT pid=3473 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.532262 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 56372 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:29.532908 containerd[1288]: time="2024-06-25T16:33:29.532858149Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:33:29.534797 kernel: audit: type=1101 audit(1719333209.526:547): pid=3473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.534967 kernel: audit: type=1103 audit(1719333209.532:548): pid=3473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.532000 audit[3473]: CRED_ACQ pid=3473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.536177 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:29.534000 audit[3473]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe26cb7ab0 a2=3 a3=7f930e7c2480 items=0 ppid=1 pid=3473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:29.534000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:29.542777 kernel: audit: type=1006 audit(1719333209.534:549): pid=3473 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:33:29.551502 systemd-logind[1274]: New session 11 of user core. Jun 25 16:33:29.562173 systemd[1]: Started session-11.scope - Session 11 of User core. 
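The `proctitle=` field in the PROCTITLE audit records above is the process argv, hex-encoded with NUL separators: the long kube-controller value decodes to `kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authori` (truncated by the record size limit), and the short sshd one decodes to `sshd: core [priv]`. A minimal decoding sketch; `decodeProctitle` is an illustrative helper, not part of any audit tooling:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts the hex-encoded proctitle field of an audit
// PROCTITLE record back into a readable command line. The kernel stores
// the process argv as raw bytes with NUL separators, then hex-encodes it.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are NUL-separated; join them with spaces for display.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Prefix of one of the PROCTITLE values recorded above.
	h := "6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565"
	s, err := decodeProctitle(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // kube-controller-manager --allocate-node-cidrs=true
}
```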
Jun 25 16:33:29.614827 containerd[1288]: time="2024-06-25T16:33:29.611507511Z" level=info msg="CreateContainer within sandbox \"bc5b10ebc7059c4ad60fc3c749e2e1a5ad5706d92d524e757b3ddb9ef36070bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5\"" Jun 25 16:33:29.615286 containerd[1288]: time="2024-06-25T16:33:29.615249036Z" level=info msg="StartContainer for \"913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5\"" Jun 25 16:33:29.617000 audit[3473]: USER_START pid=3473 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.624000 audit[3478]: CRED_ACQ pid=3478 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.774072 systemd[1]: Started cri-containerd-913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5.scope - libcontainer container 913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5. Jun 25 16:33:29.797000 audit: BPF prog-id=136 op=LOAD Jun 25 16:33:29.797000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3009 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:29.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931336133633064373462636636313563333166653464653166306534 Jun 25 16:33:29.797000 audit: BPF prog-id=137 op=LOAD Jun 25 16:33:29.797000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3009 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:29.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931336133633064373462636636313563333166653464653166306534 Jun 25 16:33:29.797000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:33:29.797000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:33:29.797000 audit: BPF prog-id=138 op=LOAD Jun 25 16:33:29.797000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3009 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:29.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931336133633064373462636636313563333166653464653166306534 Jun 25 16:33:29.830468 containerd[1288]: 
time="2024-06-25T16:33:29.829394315Z" level=info msg="StartContainer for \"913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5\" returns successfully" Jun 25 16:33:29.856364 sshd[3473]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:29.862000 audit[3473]: USER_END pid=3473 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.862000 audit[3473]: CRED_DISP pid=3473 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:29.866291 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:56372.service: Deactivated successfully. Jun 25 16:33:29.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:29.867306 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:33:29.868653 systemd-logind[1274]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:33:29.870446 systemd-logind[1274]: Removed session 11. Jun 25 16:33:29.953319 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:33:29.954087 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:33:30.159817 kubelet[2286]: E0625 16:33:30.159027 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:31.163150 kubelet[2286]: E0625 16:33:31.163107 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:31.757000 audit[3626]: AVC avc: denied { write } for pid=3626 comm="tee" name="fd" dev="proc" ino=26817 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.757000 audit[3626]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc25395a1b a2=241 a3=1b6 items=1 ppid=3611 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.757000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:33:31.757000 audit: PATH item=0 name="/dev/fd/63" inode=26022 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.763000 audit[3643]: AVC avc: denied { write } for pid=3643 comm="tee" name="fd" dev="proc" ino=27751 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.763000 audit[3643]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd681d3a2b a2=241 a3=1b6 items=1 
ppid=3608 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.763000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:33:31.763000 audit: PATH item=0 name="/dev/fd/63" inode=26027 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.763000 audit[3646]: AVC avc: denied { write } for pid=3646 comm="tee" name="fd" dev="proc" ino=26033 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.763000 audit[3646]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff00e30a2a a2=241 a3=1b6 items=1 ppid=3613 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.763000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:33:31.763000 audit: PATH item=0 name="/dev/fd/63" inode=26028 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.767000 audit[3650]: AVC avc: denied { write } for pid=3650 comm="tee" name="fd" dev="proc" ino=27757 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.767000 audit[3650]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca577ca2c a2=241 a3=1b6 items=1 ppid=3606 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.767000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:33:31.767000 audit: PATH item=0 name="/dev/fd/63" inode=27748 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.777000 audit[3659]: AVC avc: denied { write } for pid=3659 comm="tee" name="fd" dev="proc" ino=27767 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.777000 audit[3659]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef1907a2a a2=241 a3=1b6 items=1 ppid=3614 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.777000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:33:31.777000 audit: PATH item=0 name="/dev/fd/63" inode=26037 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.779000 audit[3661]: AVC avc: denied { write } for pid=3661 comm="tee" name="fd" dev="proc" ino=26045 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.779000 audit[3661]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffede8d1a1a a2=241 a3=1b6 items=1 ppid=3605 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.779000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:33:31.779000 audit: PATH item=0 name="/dev/fd/63" inode=26040 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:31.791000 audit[3672]: AVC avc: denied { write } for pid=3672 comm="tee" name="fd" dev="proc" ino=27775 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:33:31.791000 audit[3672]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc4d93a2a a2=241 a3=1b6 items=1 ppid=3618 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:31.791000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:33:31.791000 audit: PATH item=0 name="/dev/fd/63" inode=25414 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:33:31.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:33:32.109228 systemd-networkd[1111]: vxlan.calico: Link UP Jun 25 16:33:32.109358 systemd-networkd[1111]: vxlan.calico: Gained carrier Jun 25 16:33:32.126000 audit: BPF prog-id=139 op=LOAD Jun 25 16:33:32.126000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6e3059a0 a2=70 a3=7fba21d3a000 items=0 ppid=3654 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.126000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:33:32.126000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:33:32.126000 audit: BPF prog-id=140 op=LOAD Jun 25 16:33:32.126000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6e3059a0 a2=70 a3=6f items=0 ppid=3654 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.126000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:33:32.126000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:33:32.127000 audit: BPF prog-id=141 op=LOAD Jun 25 16:33:32.127000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe6e305930 a2=70 a3=7ffe6e3059a0 items=0 ppid=3654 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.127000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:33:32.127000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:33:32.127000 audit: BPF prog-id=142 op=LOAD Jun 25 16:33:32.127000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe6e305960 a2=70 a3=0 items=0 ppid=3654 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.127000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:33:32.133803 containerd[1288]: time="2024-06-25T16:33:32.132832906Z" level=info msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" Jun 25 16:33:32.133803 containerd[1288]: time="2024-06-25T16:33:32.133247605Z" level=info msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" Jun 25 16:33:32.162000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:33:32.265000 audit[3838]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3838 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:32.265000 audit[3838]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcd8b4e2d0 a2=0 a3=7ffcd8b4e2bc items=0 ppid=3654 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.265000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:32.272000 audit[3835]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3835 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:32.272000 audit[3835]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffdfcde64f0 a2=0 a3=7ffdfcde64dc items=0 ppid=3654 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.272000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:32.272000 audit[3836]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:32.272000 audit[3836]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff231f7af0 a2=0 a3=5609f7221000 items=0 ppid=3654 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:32.277000 audit[3840]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:32.277000 audit[3840]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fff0184e760 a2=0 a3=7fff0184e74c items=0 ppid=3654 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:32.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:32.473206 kubelet[2286]: I0625 16:33:32.473079 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9bzkd" podStartSLOduration=5.340974838 podCreationTimestamp="2024-06-25 16:32:59 +0000 UTC" firstStartedPulling="2024-06-25 16:33:01.378771606 +0000 UTC m=+35.379051098" lastFinishedPulling="2024-06-25 16:33:29.51082386 +0000 UTC m=+63.511103352" observedRunningTime="2024-06-25 16:33:30.186865713 +0000 UTC m=+64.187145215" watchObservedRunningTime="2024-06-25 16:33:32.473027092 +0000 UTC m=+66.473306584" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] k8s.go 608: Cleaning up netns ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" iface="eth0" netns="/var/run/netns/cni-acb0f99c-b3e1-ac04-b9c0-e4bf5484bce1" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" iface="eth0" netns="/var/run/netns/cni-acb0f99c-b3e1-ac04-b9c0-e4bf5484bce1" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" iface="eth0" netns="/var/run/netns/cni-acb0f99c-b3e1-ac04-b9c0-e4bf5484bce1" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] k8s.go 615: Releasing IP address(es) ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.540 [INFO][3802] utils.go 188: Calico CNI releasing IP address ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.647 [INFO][3849] ipam_plugin.go 411: Releasing address using handleID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.648 [INFO][3849] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.650 [INFO][3849] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.708 [WARNING][3849] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.709 [INFO][3849] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.714 [INFO][3849] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:33:32.742995 containerd[1288]: 2024-06-25 16:33:32.725 [INFO][3802] k8s.go 621: Teardown processing complete. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:33:32.755505 systemd[1]: run-netns-cni\x2dacb0f99c\x2db3e1\x2dac04\x2db9c0\x2de4bf5484bce1.mount: Deactivated successfully. 
Jun 25 16:33:32.772217 containerd[1288]: time="2024-06-25T16:33:32.765312801Z" level=info msg="TearDown network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" successfully" Jun 25 16:33:32.772217 containerd[1288]: time="2024-06-25T16:33:32.765365722Z" level=info msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" returns successfully" Jun 25 16:33:32.773450 kubelet[2286]: E0625 16:33:32.772811 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:32.773874 containerd[1288]: time="2024-06-25T16:33:32.773832818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vschb,Uid:b863dc49-acd3-403d-a912-7a94220388dd,Namespace:kube-system,Attempt:1,}" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.473 [INFO][3803] k8s.go 608: Cleaning up netns ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.473 [INFO][3803] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" iface="eth0" netns="/var/run/netns/cni-afd9face-19e4-5af8-fe5f-7436f3ab66d0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.474 [INFO][3803] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" iface="eth0" netns="/var/run/netns/cni-afd9face-19e4-5af8-fe5f-7436f3ab66d0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.475 [INFO][3803] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" iface="eth0" netns="/var/run/netns/cni-afd9face-19e4-5af8-fe5f-7436f3ab66d0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.475 [INFO][3803] k8s.go 615: Releasing IP address(es) ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.475 [INFO][3803] utils.go 188: Calico CNI releasing IP address ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.647 [INFO][3844] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.648 [INFO][3844] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.715 [INFO][3844] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.739 [WARNING][3844] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.739 [INFO][3844] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.765 [INFO][3844] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:33:32.785198 containerd[1288]: 2024-06-25 16:33:32.782 [INFO][3803] k8s.go 621: Teardown processing complete. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:33:32.791233 systemd[1]: run-netns-cni\x2dafd9face\x2d19e4\x2d5af8\x2dfe5f\x2d7436f3ab66d0.mount: Deactivated successfully. Jun 25 16:33:32.795449 containerd[1288]: time="2024-06-25T16:33:32.795374147Z" level=info msg="TearDown network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" successfully" Jun 25 16:33:32.795449 containerd[1288]: time="2024-06-25T16:33:32.795431406Z" level=info msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" returns successfully" Jun 25 16:33:32.796728 containerd[1288]: time="2024-06-25T16:33:32.796683641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd458b6c-tdlbz,Uid:18658f3c-a24d-421b-be97-f9cb52930d97,Namespace:calico-system,Attempt:1,}" Jun 25 16:33:33.132289 containerd[1288]: time="2024-06-25T16:33:33.132129117Z" level=info msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.419 [INFO][3878] k8s.go 608: Cleaning up netns ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.420 [INFO][3878] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" iface="eth0" netns="/var/run/netns/cni-787bd3cf-1d79-63ee-8ec8-e464934203d0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.420 [INFO][3878] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" iface="eth0" netns="/var/run/netns/cni-787bd3cf-1d79-63ee-8ec8-e464934203d0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.421 [INFO][3878] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" iface="eth0" netns="/var/run/netns/cni-787bd3cf-1d79-63ee-8ec8-e464934203d0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.421 [INFO][3878] k8s.go 615: Releasing IP address(es) ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.421 [INFO][3878] utils.go 188: Calico CNI releasing IP address ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.493 [INFO][3886] ipam_plugin.go 411: Releasing address using handleID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.494 [INFO][3886] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.494 [INFO][3886] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.505 [WARNING][3886] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.505 [INFO][3886] ipam_plugin.go 439: Releasing address using workloadID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.517 [INFO][3886] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:33:33.538393 containerd[1288]: 2024-06-25 16:33:33.525 [INFO][3878] k8s.go 621: Teardown processing complete. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:33:33.542115 systemd[1]: run-netns-cni\x2d787bd3cf\x2d1d79\x2d63ee\x2d8ec8\x2de464934203d0.mount: Deactivated successfully. 
Jun 25 16:33:33.550125 containerd[1288]: time="2024-06-25T16:33:33.550054930Z" level=info msg="TearDown network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" successfully" Jun 25 16:33:33.550125 containerd[1288]: time="2024-06-25T16:33:33.550119102Z" level=info msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" returns successfully" Jun 25 16:33:33.554266 containerd[1288]: time="2024-06-25T16:33:33.550817341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xkz9,Uid:72bf43a2-ad8b-409f-8c68-9b745ebeb647,Namespace:calico-system,Attempt:1,}" Jun 25 16:33:33.903639 systemd-networkd[1111]: vxlan.calico: Gained IPv6LL Jun 25 16:33:34.204497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:33:34.204619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calideb3e38e492: link becomes ready Jun 25 16:33:34.207371 systemd-networkd[1111]: calideb3e38e492: Link UP Jun 25 16:33:34.207538 systemd-networkd[1111]: calideb3e38e492: Gained carrier Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:33.965 [INFO][3893] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--vschb-eth0 coredns-5dd5756b68- kube-system b863dc49-acd3-403d-a912-7a94220388dd 902 0 2024-06-25 16:32:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-vschb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calideb3e38e492 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:33.965 [INFO][3893] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.046 [INFO][3921] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" HandleID="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.063 [INFO][3921] ipam_plugin.go 264: Auto assigning IP ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" HandleID="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-vschb", "timestamp":"2024-06-25 16:33:34.046267701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.063 [INFO][3921] ipam_plugin.go 352: About to 
acquire host-wide IPAM lock. Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.063 [INFO][3921] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.063 [INFO][3921] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.072 [INFO][3921] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.098 [INFO][3921] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.122 [INFO][3921] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.132 [INFO][3921] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.150 [INFO][3921] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.151 [INFO][3921] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.153 [INFO][3921] ipam.go 1685: Creating new handle: k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715 Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.164 [INFO][3921] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.192 [INFO][3921] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.192 [INFO][3921] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" host="localhost" Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.192 [INFO][3921] ipam_plugin.go 373: Released host-wide IPAM lock. 
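The IPAM walk above ends with 192.168.88.129 being claimed for the coredns pod out of the affine block 192.168.88.128/26 (a /26 covers 192.168.88.128 through 192.168.88.191). A quick standard-library check, nothing Calico-specific, that the claimed address really falls inside that block:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address as they appear in the IPAM records above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	addr := netip.MustParseAddr("192.168.88.129")

	fmt.Println(block.Contains(addr)) // true
}
```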
Jun 25 16:33:34.373294 containerd[1288]: 2024-06-25 16:33:34.192 [INFO][3921] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" HandleID="k8s-pod-network.db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.196 [INFO][3893] k8s.go 386: Populated endpoint ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--vschb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b863dc49-acd3-403d-a912-7a94220388dd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-vschb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideb3e38e492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.196 [INFO][3893] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.196 [INFO][3893] dataplane_linux.go 68: Setting the host side veth name to calideb3e38e492 ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.204 [INFO][3893] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.212 [INFO][3893] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--vschb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b863dc49-acd3-403d-a912-7a94220388dd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715", Pod:"coredns-5dd5756b68-vschb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideb3e38e492", MAC:"4e:52:cb:26:74:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.374116 containerd[1288]: 2024-06-25 16:33:34.361 [INFO][3893] k8s.go 500: Wrote updated endpoint to datastore ContainerID="db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715" Namespace="kube-system" Pod="coredns-5dd5756b68-vschb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:33:34.422882 kernel: kauditd_printk_skb: 81 callbacks suppressed Jun 25 16:33:34.423028 kernel: audit: type=1325 audit(1719333214.420:579): table=filter:101 family=2 entries=34 op=nft_register_chain pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:34.420000 audit[3977]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:34.420000 audit[3977]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffcf1a55da0 a2=0 a3=7ffcf1a55d8c items=0 ppid=3654 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.449168 kernel: audit: type=1300 audit(1719333214.420:579): arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffcf1a55da0 a2=0 a3=7ffcf1a55d8c items=0 ppid=3654 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.420000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:34.468715 kernel: audit: type=1327 audit(1719333214.420:579): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:34.536101 containerd[1288]: time="2024-06-25T16:33:34.534909145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:34.536101 containerd[1288]: time="2024-06-25T16:33:34.535110568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.536101 containerd[1288]: time="2024-06-25T16:33:34.535165052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:34.536101 containerd[1288]: time="2024-06-25T16:33:34.535198365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.568677 systemd-networkd[1111]: cali843439c21e9: Link UP Jun 25 16:33:34.574385 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali843439c21e9: link becomes ready Jun 25 16:33:34.573067 systemd-networkd[1111]: cali843439c21e9: Gained carrier Jun 25 16:33:34.633249 systemd[1]: Started cri-containerd-db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715.scope - libcontainer container db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715. Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.041 [INFO][3908] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0 calico-kube-controllers-7dfd458b6c- calico-system 18658f3c-a24d-421b-be97-f9cb52930d97 901 0 2024-06-25 16:32:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dfd458b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dfd458b6c-tdlbz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali843439c21e9 [] []}} ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.041 [INFO][3908] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.165 [INFO][3943] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" HandleID="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 
25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.382 [INFO][3943] ipam_plugin.go 264: Auto assigning IP ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" HandleID="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dfd458b6c-tdlbz", "timestamp":"2024-06-25 16:33:34.158528686 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.382 [INFO][3943] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.382 [INFO][3943] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.382 [INFO][3943] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.385 [INFO][3943] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.413 [INFO][3943] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.437 [INFO][3943] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.452 [INFO][3943] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.485 [INFO][3943] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.485 [INFO][3943] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.489 [INFO][3943] ipam.go 1685: Creating new handle: k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62 Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.520 [INFO][3943] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.540 [INFO][3943] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.541 [INFO][3943] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" host="localhost" Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.541 [INFO][3943] ipam_plugin.go 373: Released host-wide IPAM lock. 
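The `audit(1719333214.420:579)` style stamps in these records are `<unix-seconds>.<milliseconds>:<serial>`; converting the epoch part gives the same wall-clock instant as the journal prefix (1719333214 is 2024-06-25 16:33:34 UTC), which is how the kernel `audit:` echoes line up with the `audit[pid]:` records. A small conversion sketch, assuming the stamp has already been isolated from the record text; `parseAuditStamp` is an illustrative helper:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseAuditStamp turns an audit timestamp like "1719333214.420:579"
// into a time.Time plus the record serial number.
func parseAuditStamp(stamp string) (time.Time, uint64, error) {
	tsPart, serialPart, ok := strings.Cut(stamp, ":")
	if !ok {
		return time.Time{}, 0, fmt.Errorf("no serial in %q", stamp)
	}
	secStr, msStr, _ := strings.Cut(tsPart, ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	ms, _ := strconv.ParseInt(msStr, 10, 64) // zero if the fraction is missing
	serial, err := strconv.ParseUint(serialPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	return time.Unix(sec, ms*int64(time.Millisecond)).UTC(), serial, nil
}

func main() {
	t, serial, err := parseAuditStamp("1719333214.420:579")
	if err != nil {
		panic(err)
	}
	fmt.Println(t, serial) // 2024-06-25 16:33:34.42 +0000 UTC 579
}
```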
Jun 25 16:33:34.651914 containerd[1288]: 2024-06-25 16:33:34.541 [INFO][3943] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" HandleID="k8s-pod-network.8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.551 [INFO][3908] k8s.go 386: Populated endpoint ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0", GenerateName:"calico-kube-controllers-7dfd458b6c-", Namespace:"calico-system", SelfLink:"", UID:"18658f3c-a24d-421b-be97-f9cb52930d97", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd458b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dfd458b6c-tdlbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali843439c21e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.551 [INFO][3908] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.551 [INFO][3908] dataplane_linux.go 68: Setting the host side veth name to cali843439c21e9 ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.584 [INFO][3908] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.587 [INFO][3908] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0", GenerateName:"calico-kube-controllers-7dfd458b6c-", Namespace:"calico-system", SelfLink:"", UID:"18658f3c-a24d-421b-be97-f9cb52930d97", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd458b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62", Pod:"calico-kube-controllers-7dfd458b6c-tdlbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali843439c21e9", MAC:"96:4b:d6:51:35:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.658987 containerd[1288]: 2024-06-25 16:33:34.636 [INFO][3908] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62" Namespace="calico-system" Pod="calico-kube-controllers-7dfd458b6c-tdlbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:33:34.684000 audit[4018]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4018 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:34.684000 audit[4018]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffed3f91120 a2=0 a3=7ffed3f9110c items=0 ppid=3654 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.701304 kernel: audit: type=1325 audit(1719333214.684:580): table=filter:102 family=2 entries=38 op=nft_register_chain pid=4018 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:34.701460 kernel: audit: type=1300 audit(1719333214.684:580): arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffed3f91120 a2=0 a3=7ffed3f9110c items=0 ppid=3654 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.701489 kernel: audit: type=1327 audit(1719333214.684:580): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:34.701512 kernel: 
audit: type=1334 audit(1719333214.688:581): prog-id=143 op=LOAD Jun 25 16:33:34.684000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:34.688000 audit: BPF prog-id=143 op=LOAD Jun 25 16:33:34.732983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2c412db749b: link becomes ready Jun 25 16:33:34.733088 kernel: audit: type=1334 audit(1719333214.689:582): prog-id=144 op=LOAD Jun 25 16:33:34.689000 audit: BPF prog-id=144 op=LOAD Jun 25 16:33:34.722782 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:33:34.731999 systemd-networkd[1111]: cali2c412db749b: Link UP Jun 25 16:33:34.732159 systemd-networkd[1111]: cali2c412db749b: Gained carrier Jun 25 16:33:34.738680 kernel: audit: type=1300 audit(1719333214.689:582): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.689000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326466336539326433353031663438633834666439393461356362 Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.121 [INFO][3928] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7xkz9-eth0 csi-node-driver- calico-system 72bf43a2-ad8b-409f-8c68-9b745ebeb647 911 0 2024-06-25 16:32:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-7xkz9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali2c412db749b [] []}} ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.121 [INFO][3928] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.239 [INFO][3953] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" HandleID="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.384 [INFO][3953] ipam_plugin.go 264: Auto 
assigning IP ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" HandleID="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f08f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7xkz9", "timestamp":"2024-06-25 16:33:34.239266276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.384 [INFO][3953] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.547 [INFO][3953] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.547 [INFO][3953] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.558 [INFO][3953] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.589 [INFO][3953] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.642 [INFO][3953] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.648 [INFO][3953] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.654 [INFO][3953] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.655 [INFO][3953] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.669 [INFO][3953] ipam.go 1685: Creating new handle: k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245 Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.681 [INFO][3953] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.702 [INFO][3953] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.702 [INFO][3953] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" host="localhost" Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.702 [INFO][3953] ipam_plugin.go 373: Released host-wide IPAM lock. 
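The second request ([INFO][3953], for csi-node-driver-7xkz9) logged "About to acquire host-wide IPAM lock" at 34.384 but only acquired it at 34.547, right after the first request released it, and then claimed the next address, 192.168.88.131. A minimal in-process sketch of why that serialization yields distinct addresses, with a sync.Mutex standing in for Calico's host-wide lock; the pod names and starting offset come from the log, everything else is illustrative.

package main

import (
	"fmt"
	"sync"
)

// allocator hands out consecutive addresses from 192.168.88.128/26. The mutex
// is an in-process stand-in for the host-wide IPAM lock seen in the trace; it
// is what keeps concurrent CNI ADDs from claiming the same address.
type allocator struct {
	mu   sync.Mutex
	next int // host index within the /26; 128+next is the next candidate
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 128+a.next)
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 2} // .128 and .129 assumed already in use
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-kube-controllers-7dfd458b6c-tdlbz", "csi-node-driver-7xkz9"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", a.assign()) // each pod gets a distinct address
		}(pod)
	}
	wg.Wait()
}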
Jun 25 16:33:34.744217 containerd[1288]: 2024-06-25 16:33:34.702 [INFO][3953] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" HandleID="k8s-pod-network.56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.709 [INFO][3928] k8s.go 386: Populated endpoint ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xkz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72bf43a2-ad8b-409f-8c68-9b745ebeb647", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7xkz9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2c412db749b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.710 [INFO][3928] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.710 [INFO][3928] dataplane_linux.go 68: Setting the host side veth name to cali2c412db749b ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.711 [INFO][3928] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.711 [INFO][3928] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xkz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72bf43a2-ad8b-409f-8c68-9b745ebeb647", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245", Pod:"csi-node-driver-7xkz9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2c412db749b", MAC:"86:0f:01:62:9d:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:34.744891 containerd[1288]: 2024-06-25 16:33:34.721 [INFO][3928] k8s.go 500: Wrote updated endpoint to datastore ContainerID="56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245" Namespace="calico-system" Pod="csi-node-driver-7xkz9" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:33:34.751903 kernel: audit: type=1327 audit(1719333214.689:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326466336539326433353031663438633834666439393461356362 Jun 25 16:33:34.689000 audit: BPF prog-id=145 op=LOAD Jun 25 16:33:34.689000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326466336539326433353031663438633834666439393461356362 Jun 25 16:33:34.689000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:33:34.689000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:33:34.689000 audit: BPF prog-id=146 op=LOAD Jun 25 16:33:34.689000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.689000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326466336539326433353031663438633834666439393461356362 Jun 25 16:33:34.763000 audit[4040]: NETFILTER_CFG table=filter:103 family=2 entries=38 op=nft_register_chain pid=4040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:34.763000 audit[4040]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffd39183c70 a2=0 a3=7ffd39183c5c items=0 ppid=3654 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.763000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:34.817613 containerd[1288]: time="2024-06-25T16:33:34.817225220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:34.817613 containerd[1288]: time="2024-06-25T16:33:34.817281076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.817613 containerd[1288]: time="2024-06-25T16:33:34.817298880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:34.817613 containerd[1288]: time="2024-06-25T16:33:34.817311884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.825089 containerd[1288]: time="2024-06-25T16:33:34.824042264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vschb,Uid:b863dc49-acd3-403d-a912-7a94220388dd,Namespace:kube-system,Attempt:1,} returns sandbox id \"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715\"" Jun 25 16:33:34.827953 kubelet[2286]: E0625 16:33:34.827686 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:34.834471 containerd[1288]: time="2024-06-25T16:33:34.829154624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:34.834471 containerd[1288]: time="2024-06-25T16:33:34.829230728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.834471 containerd[1288]: time="2024-06-25T16:33:34.829249975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:34.834471 containerd[1288]: time="2024-06-25T16:33:34.829261437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:34.836580 containerd[1288]: time="2024-06-25T16:33:34.836541973Z" level=info msg="CreateContainer within sandbox \"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:33:34.864008 systemd[1]: Started cri-containerd-56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245.scope - libcontainer container 56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245. Jun 25 16:33:34.898040 systemd[1]: Started cri-containerd-8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62.scope - libcontainer container 8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62. Jun 25 16:33:34.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:56384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:34.904132 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:56384.service - OpenSSH per-connection server daemon (10.0.0.1:56384). Jun 25 16:33:34.937000 audit: BPF prog-id=147 op=LOAD Jun 25 16:33:34.938000 audit: BPF prog-id=148 op=LOAD Jun 25 16:33:34.938000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4065 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633535366565346635306132616132383962663938336662393438 Jun 25 16:33:34.938000 audit: BPF prog-id=149 op=LOAD Jun 25 16:33:34.938000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4065 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633535366565346635306132616132383962663938336662393438 Jun 25 16:33:34.938000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:33:34.938000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:33:34.938000 audit: BPF prog-id=150 op=LOAD Jun 25 16:33:34.938000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4065 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633535366565346635306132616132383962663938336662393438 Jun 25 16:33:34.942000 audit: BPF prog-id=151 op=LOAD Jun 25 16:33:34.942000 audit: BPF prog-id=152 op=LOAD Jun 25 16:33:34.942000 audit[4085]: SYSCALL arch=c000003e syscall=321 
success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4063 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303934353535303666343630393538656630626262626636373931 Jun 25 16:33:34.942000 audit: BPF prog-id=153 op=LOAD Jun 25 16:33:34.942000 audit[4085]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4063 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303934353535303666343630393538656630626262626636373931 Jun 25 16:33:34.942000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:33:34.942000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:33:34.942000 audit: BPF prog-id=154 op=LOAD Jun 25 16:33:34.942000 audit[4085]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4063 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:34.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303934353535303666343630393538656630626262626636373931 Jun 25 16:33:34.946030 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:33:34.956575 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:33:34.968274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904117422.mount: Deactivated successfully. Jun 25 16:33:35.029022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508384857.mount: Deactivated successfully. 
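The audit PROCTITLE fields interleaved above are hex-encoded, NUL-separated command lines; the iptables-nft-re events, for instance, decode to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000. A small standard-library decoder, for reference:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a readable
// command line; argv elements are separated by NUL bytes in the raw record.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// PROCTITLE value from the iptables-nft-re audit events above.
	const h = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	cmd, err := decodeProctitle(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}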
Jun 25 16:33:35.044867 containerd[1288]: time="2024-06-25T16:33:35.043779900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xkz9,Uid:72bf43a2-ad8b-409f-8c68-9b745ebeb647,Namespace:calico-system,Attempt:1,} returns sandbox id \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\"" Jun 25 16:33:35.054253 containerd[1288]: time="2024-06-25T16:33:35.053240038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:33:35.061000 audit[4112]: USER_ACCT pid=4112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.063471 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 56384 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:35.063000 audit[4112]: CRED_ACQ pid=4112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.063000 audit[4112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe18fe0210 a2=3 a3=7ff6f2cbc480 items=0 ppid=1 pid=4112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.063000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:35.066273 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:35.078597 containerd[1288]: time="2024-06-25T16:33:35.076615119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd458b6c-tdlbz,Uid:18658f3c-a24d-421b-be97-f9cb52930d97,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62\"" Jun 25 16:33:35.091643 systemd-logind[1274]: New session 12 of user core. Jun 25 16:33:35.096120 systemd[1]: Started session-12.scope - Session 12 of User core. 
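Each containerd "returns sandbox id" message above pairs with a systemd transient unit started for the same 64-character id (cri-containerd-56c556ee….scope and cri-containerd-8c094555….scope earlier in the log). A hypothetical helper that extracts the id from such a line and derives the unit name; the function name and regexp are illustrative, not part of containerd.

package main

import (
	"fmt"
	"regexp"
)

// sandboxRe matches containerd's `returns sandbox id \"<hex>\"` messages; the
// optional backslashes cover the escaped quotes as they appear in the journal.
var sandboxRe = regexp.MustCompile(`returns sandbox id \\?"([0-9a-f]+)\\?"`)

// scopeUnit derives the systemd transient unit name that the log shows being
// started for the same sandbox id (cri-containerd-<id>.scope).
func scopeUnit(logLine string) (string, bool) {
	m := sandboxRe.FindStringSubmatch(logLine)
	if m == nil {
		return "", false
	}
	return "cri-containerd-" + m[1] + ".scope", true
}

func main() {
	line := `returns sandbox id \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\"`
	if unit, ok := scopeUnit(line); ok {
		fmt.Println(unit) // prints the matching .scope unit name
	}
}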
Jun 25 16:33:35.120041 containerd[1288]: time="2024-06-25T16:33:35.116516492Z" level=info msg="CreateContainer within sandbox \"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96a3cdf3e2dd94b3279c082f18873d779198989d23948fa011bdbe44cb253920\"" Jun 25 16:33:35.120041 containerd[1288]: time="2024-06-25T16:33:35.117668123Z" level=info msg="StartContainer for \"96a3cdf3e2dd94b3279c082f18873d779198989d23948fa011bdbe44cb253920\"" Jun 25 16:33:35.137315 containerd[1288]: time="2024-06-25T16:33:35.136365184Z" level=info msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" Jun 25 16:33:35.142000 audit[4112]: USER_START pid=4112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.151000 audit[4134]: CRED_ACQ pid=4134 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.227496 systemd[1]: Started cri-containerd-96a3cdf3e2dd94b3279c082f18873d779198989d23948fa011bdbe44cb253920.scope - libcontainer container 96a3cdf3e2dd94b3279c082f18873d779198989d23948fa011bdbe44cb253920. Jun 25 16:33:35.274000 audit: BPF prog-id=155 op=LOAD Jun 25 16:33:35.278000 audit: BPF prog-id=156 op=LOAD Jun 25 16:33:35.278000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3986 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613363646633653264643934623332373963303832663138383733 Jun 25 16:33:35.278000 audit: BPF prog-id=157 op=LOAD Jun 25 16:33:35.278000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3986 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613363646633653264643934623332373963303832663138383733 Jun 25 16:33:35.279000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:33:35.279000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:33:35.279000 audit: BPF prog-id=158 op=LOAD Jun 25 16:33:35.279000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3986 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.279000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613363646633653264643934623332373963303832663138383733 Jun 25 16:33:35.403657 containerd[1288]: time="2024-06-25T16:33:35.403490525Z" level=info msg="StartContainer for \"96a3cdf3e2dd94b3279c082f18873d779198989d23948fa011bdbe44cb253920\" returns successfully" Jun 25 16:33:35.431565 sshd[4112]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:35.432000 audit[4112]: USER_END pid=4112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.432000 audit[4112]: CRED_DISP pid=4112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:35.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:56384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:35.436069 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:56384.service: Deactivated successfully. Jun 25 16:33:35.437110 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:33:35.437913 systemd-logind[1274]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:33:35.441655 systemd-logind[1274]: Removed session 12. Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.371 [INFO][4158] k8s.go 608: Cleaning up netns ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.371 [INFO][4158] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" iface="eth0" netns="/var/run/netns/cni-7183096f-c6fb-e333-b529-4214ceb6c2f0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.371 [INFO][4158] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" iface="eth0" netns="/var/run/netns/cni-7183096f-c6fb-e333-b529-4214ceb6c2f0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.372 [INFO][4158] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" iface="eth0" netns="/var/run/netns/cni-7183096f-c6fb-e333-b529-4214ceb6c2f0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.372 [INFO][4158] k8s.go 615: Releasing IP address(es) ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.372 [INFO][4158] utils.go 188: Calico CNI releasing IP address ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.414 [INFO][4197] ipam_plugin.go 411: Releasing address using handleID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.414 [INFO][4197] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.414 [INFO][4197] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.433 [WARNING][4197] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.433 [INFO][4197] ipam_plugin.go 439: Releasing address using workloadID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.440 [INFO][4197] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:33:35.444646 containerd[1288]: 2024-06-25 16:33:35.442 [INFO][4158] k8s.go 621: Teardown processing complete. 
ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:33:35.445812 containerd[1288]: time="2024-06-25T16:33:35.445703536Z" level=info msg="TearDown network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" successfully" Jun 25 16:33:35.446125 containerd[1288]: time="2024-06-25T16:33:35.446106573Z" level=info msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" returns successfully" Jun 25 16:33:35.447205 kubelet[2286]: E0625 16:33:35.446538 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:35.451557 containerd[1288]: time="2024-06-25T16:33:35.451510965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p68b4,Uid:1ecf1669-3c1d-4bb9-be93-082a2bca0c94,Namespace:kube-system,Attempt:1,}" Jun 25 16:33:35.610954 systemd-networkd[1111]: calideb3e38e492: Gained IPv6LL Jun 25 16:33:35.689083 systemd-networkd[1111]: calia3d2f637112: Link UP Jun 25 16:33:35.692420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:33:35.692504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia3d2f637112: link becomes ready Jun 25 16:33:35.692658 systemd-networkd[1111]: calia3d2f637112: Gained carrier Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.572 [INFO][4214] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--p68b4-eth0 coredns-5dd5756b68- kube-system 1ecf1669-3c1d-4bb9-be93-082a2bca0c94 932 0 2024-06-25 16:32:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-p68b4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3d2f637112 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.572 [INFO][4214] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.616 [INFO][4227] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" HandleID="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.629 [INFO][4227] ipam_plugin.go 264: Auto assigning IP ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" HandleID="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000368220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-p68b4", "timestamp":"2024-06-25 16:33:35.616520609 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.629 [INFO][4227] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.630 [INFO][4227] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.630 [INFO][4227] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.633 [INFO][4227] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.642 [INFO][4227] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.654 [INFO][4227] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.659 [INFO][4227] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.663 [INFO][4227] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.664 [INFO][4227] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.666 [INFO][4227] ipam.go 1685: Creating new handle: k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.675 [INFO][4227] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.682 [INFO][4227] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.682 [INFO][4227] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" host="localhost" Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.682 [INFO][4227] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:33:35.721889 containerd[1288]: 2024-06-25 16:33:35.682 [INFO][4227] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" HandleID="k8s-pod-network.613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.685 [INFO][4214] k8s.go 386: Populated endpoint ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p68b4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1ecf1669-3c1d-4bb9-be93-082a2bca0c94", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-p68b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3d2f637112", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.685 [INFO][4214] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.685 [INFO][4214] dataplane_linux.go 68: Setting the host side veth name to calia3d2f637112 ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.699 [INFO][4214] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.700 [INFO][4214] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p68b4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1ecf1669-3c1d-4bb9-be93-082a2bca0c94", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c", Pod:"coredns-5dd5756b68-p68b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3d2f637112", MAC:"6e:d1:36:fa:92:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:33:35.723178 containerd[1288]: 2024-06-25 16:33:35.713 [INFO][4214] k8s.go 500: Wrote updated endpoint to datastore ContainerID="613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c" Namespace="kube-system" Pod="coredns-5dd5756b68-p68b4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:33:35.735000 audit[4255]: NETFILTER_CFG table=filter:104 family=2 entries=38 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:33:35.735000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffec40c7e90 a2=0 a3=7ffec40c7e7c items=0 ppid=3654 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.735000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:33:35.749941 containerd[1288]: time="2024-06-25T16:33:35.749321204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:35.749941 containerd[1288]: time="2024-06-25T16:33:35.749801839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:35.749941 containerd[1288]: time="2024-06-25T16:33:35.749825243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:35.749941 containerd[1288]: time="2024-06-25T16:33:35.749837777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:35.789083 systemd[1]: Started cri-containerd-613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c.scope - libcontainer container 613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c. Jun 25 16:33:35.812000 audit: BPF prog-id=159 op=LOAD Jun 25 16:33:35.813000 audit: BPF prog-id=160 op=LOAD Jun 25 16:33:35.813000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4263 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336332313230313263333464396632363836353034313833303531 Jun 25 16:33:35.813000 audit: BPF prog-id=161 op=LOAD Jun 25 16:33:35.813000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4263 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336332313230313263333464396632363836353034313833303531 Jun 25 16:33:35.813000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:33:35.813000 audit: BPF prog-id=160 op=UNLOAD Jun 25 16:33:35.813000 audit: BPF prog-id=162 op=LOAD Jun 25 16:33:35.813000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4263 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:35.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336332313230313263333464396632363836353034313833303531 Jun 25 16:33:35.815841 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:33:35.865619 containerd[1288]: time="2024-06-25T16:33:35.865523321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p68b4,Uid:1ecf1669-3c1d-4bb9-be93-082a2bca0c94,Namespace:kube-system,Attempt:1,} returns sandbox id \"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c\"" Jun 25 16:33:35.867735 kubelet[2286]: E0625 16:33:35.867693 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:35.873784 containerd[1288]: time="2024-06-25T16:33:35.872660960Z" level=info msg="CreateContainer within sandbox \"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:33:35.913892 systemd[1]: run-netns-cni\x2d7183096f\x2dc6fb\x2de333\x2db529\x2d4214ceb6c2f0.mount: Deactivated successfully. Jun 25 16:33:35.932038 systemd-networkd[1111]: cali2c412db749b: Gained IPv6LL Jun 25 16:33:35.999990 systemd-networkd[1111]: cali843439c21e9: Gained IPv6LL Jun 25 16:33:36.218840 kubelet[2286]: E0625 16:33:36.218562 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:36.258585 kubelet[2286]: I0625 16:33:36.258292 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vschb" podStartSLOduration=57.258245393 podCreationTimestamp="2024-06-25 16:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:33:36.257220503 +0000 UTC m=+70.257500016" watchObservedRunningTime="2024-06-25 16:33:36.258245393 +0000 UTC m=+70.258524885" Jun 25 16:33:36.298368 containerd[1288]: time="2024-06-25T16:33:36.298282438Z" level=info msg="CreateContainer within sandbox \"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5510cad036a8843c1ee638248cce15d19124706beba0f65f7ef71a6b45133eb6\"" Jun 25 16:33:36.302322 containerd[1288]: time="2024-06-25T16:33:36.302266434Z" level=info msg="StartContainer for \"5510cad036a8843c1ee638248cce15d19124706beba0f65f7ef71a6b45133eb6\"" Jun 25 16:33:36.401826 systemd[1]: Started cri-containerd-5510cad036a8843c1ee638248cce15d19124706beba0f65f7ef71a6b45133eb6.scope - libcontainer container 5510cad036a8843c1ee638248cce15d19124706beba0f65f7ef71a6b45133eb6. 
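The kubelet pod_startup_latency_tracker line above reports podStartSLOduration=57.258245393 for coredns-5dd5756b68-vschb; that figure matches watchObservedRunningTime (16:33:36.258245393) minus podCreationTimestamp (16:32:39). A quick check with the standard time package, with the layout string assumed to match the logged timestamp format:

package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kubelet line above; Go accepts the fractional
	// seconds on the second value even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created := mustParse(layout, "2024-06-25 16:32:39 +0000 UTC")
	observed := mustParse(layout, "2024-06-25 16:33:36.258245393 +0000 UTC")
	fmt.Printf("%.9f\n", observed.Sub(created).Seconds()) // 57.258245393
}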
Jun 25 16:33:36.404000 audit[4314]: NETFILTER_CFG table=filter:105 family=2 entries=11 op=nft_register_rule pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:36.404000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff0f559990 a2=0 a3=7fff0f55997c items=0 ppid=2478 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:36.407000 audit[4314]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:36.407000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff0f559990 a2=0 a3=7fff0f55997c items=0 ppid=2478 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:36.421000 audit: BPF prog-id=163 op=LOAD Jun 25 16:33:36.421000 audit: BPF prog-id=164 op=LOAD Jun 25 16:33:36.421000 audit[4305]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4263 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535313063616430333661383834336331656536333832343863636531 Jun 25 16:33:36.421000 audit: BPF prog-id=165 op=LOAD Jun 25 16:33:36.421000 audit[4305]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4263 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535313063616430333661383834336331656536333832343863636531 Jun 25 16:33:36.421000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:33:36.421000 audit: BPF prog-id=164 op=UNLOAD Jun 25 16:33:36.421000 audit: BPF prog-id=166 op=LOAD Jun 25 16:33:36.421000 audit[4305]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4263 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.421000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535313063616430333661383834336331656536333832343863636531 Jun 25 16:33:36.463425 containerd[1288]: time="2024-06-25T16:33:36.463368767Z" level=info msg="StartContainer for \"5510cad036a8843c1ee638248cce15d19124706beba0f65f7ef71a6b45133eb6\" returns successfully" Jun 25 16:33:36.479000 audit[4337]: NETFILTER_CFG table=filter:107 family=2 entries=8 op=nft_register_rule pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:36.479000 audit[4337]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd521330a0 a2=0 a3=7ffd5213308c items=0 ppid=2478 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:36.484000 audit[4337]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:36.484000 audit[4337]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd521330a0 a2=0 a3=7ffd5213308c items=0 ppid=2478 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:36.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:37.225117 kubelet[2286]: E0625 16:33:37.225070 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:37.225584 kubelet[2286]: E0625 16:33:37.225070 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:37.347000 audit[4351]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:37.347000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd28f1f290 a2=0 a3=7ffd28f1f27c items=0 ppid=2478 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:37.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:37.349000 audit[4351]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:37.349000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd28f1f290 a2=0 a3=7ffd28f1f27c items=0 ppid=2478 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:33:37.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:37.467223 systemd-networkd[1111]: calia3d2f637112: Gained IPv6LL Jun 25 16:33:37.624226 containerd[1288]: time="2024-06-25T16:33:37.623635958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:37.648948 containerd[1288]: time="2024-06-25T16:33:37.648734366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:33:37.667724 containerd[1288]: time="2024-06-25T16:33:37.667467121Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:37.721050 containerd[1288]: time="2024-06-25T16:33:37.720236516Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:37.735696 containerd[1288]: time="2024-06-25T16:33:37.735633319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:37.737030 containerd[1288]: time="2024-06-25T16:33:37.736981893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.683669208s" Jun 25 16:33:37.737141 containerd[1288]: time="2024-06-25T16:33:37.737118212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:33:37.747792 containerd[1288]: time="2024-06-25T16:33:37.747406413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:33:37.747792 containerd[1288]: time="2024-06-25T16:33:37.747469512Z" level=info msg="CreateContainer within sandbox \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:33:38.041664 containerd[1288]: time="2024-06-25T16:33:38.041539667Z" level=info msg="CreateContainer within sandbox \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a0253489060a7840203edbd14f08450ed90f6cdb280a4ea09c77253f7a3d483d\"" Jun 25 16:33:38.044545 containerd[1288]: time="2024-06-25T16:33:38.042566058Z" level=info msg="StartContainer for \"a0253489060a7840203edbd14f08450ed90f6cdb280a4ea09c77253f7a3d483d\"" Jun 25 16:33:38.124065 systemd[1]: Started cri-containerd-a0253489060a7840203edbd14f08450ed90f6cdb280a4ea09c77253f7a3d483d.scope - libcontainer container a0253489060a7840203edbd14f08450ed90f6cdb280a4ea09c77253f7a3d483d. 
Jun 25 16:33:38.175000 audit: BPF prog-id=167 op=LOAD Jun 25 16:33:38.175000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4065 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:38.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323533343839303630613738343032303365646264313466303834 Jun 25 16:33:38.175000 audit: BPF prog-id=168 op=LOAD Jun 25 16:33:38.175000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4065 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:38.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323533343839303630613738343032303365646264313466303834 Jun 25 16:33:38.175000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:33:38.175000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:33:38.175000 audit: BPF prog-id=169 op=LOAD Jun 25 16:33:38.175000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4065 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:38.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323533343839303630613738343032303365646264313466303834 Jun 25 16:33:38.253563 containerd[1288]: time="2024-06-25T16:33:38.249631234Z" level=info msg="StartContainer for \"a0253489060a7840203edbd14f08450ed90f6cdb280a4ea09c77253f7a3d483d\" returns successfully" Jun 25 16:33:38.260393 kubelet[2286]: E0625 16:33:38.256454 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:38.426673 kubelet[2286]: I0625 16:33:38.426541 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-p68b4" podStartSLOduration=59.426482746 podCreationTimestamp="2024-06-25 16:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:33:37.328606715 +0000 UTC m=+71.328886207" watchObservedRunningTime="2024-06-25 16:33:38.426482746 +0000 UTC m=+72.426762238" Jun 25 16:33:38.473000 audit[4387]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:38.473000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe43dbbe50 a2=0 a3=7ffe43dbbe3c items=0 ppid=2478 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:38.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:38.499000 audit[4387]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:38.499000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe43dbbe50 a2=0 a3=7ffe43dbbe3c items=0 ppid=2478 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:38.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:39.260690 kubelet[2286]: E0625 16:33:39.260289 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:40.262656 kubelet[2286]: E0625 16:33:40.262613 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:40.448213 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:56432.service - OpenSSH per-connection server daemon (10.0.0.1:56432). Jun 25 16:33:40.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:56432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:40.474068 kernel: kauditd_printk_skb: 120 callbacks suppressed Jun 25 16:33:40.474241 kernel: audit: type=1130 audit(1719333220.447:641): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:56432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:40.527000 audit[4390]: USER_ACCT pid=4390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.529052 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 56432 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:40.576880 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:40.530000 audit[4390]: CRED_ACQ pid=4390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.585549 kernel: audit: type=1101 audit(1719333220.527:642): pid=4390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.585738 kernel: audit: type=1103 audit(1719333220.530:643): pid=4390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.530000 audit[4390]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc2e65c00 a2=3 a3=7fd494feb480 items=0 ppid=1 pid=4390 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:40.588369 systemd-logind[1274]: New session 13 of user core. Jun 25 16:33:40.602423 kernel: audit: type=1006 audit(1719333220.530:644): pid=4390 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:33:40.602529 kernel: audit: type=1300 audit(1719333220.530:644): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc2e65c00 a2=3 a3=7fd494feb480 items=0 ppid=1 pid=4390 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:40.602562 kernel: audit: type=1327 audit(1719333220.530:644): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:40.530000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:40.602264 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 16:33:40.615000 audit[4390]: USER_START pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.622246 kernel: audit: type=1105 audit(1719333220.615:645): pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.625000 audit[4392]: CRED_ACQ pid=4392 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.630814 kernel: audit: type=1103 audit(1719333220.625:646): pid=4392 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.806698 sshd[4390]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:40.807000 audit[4390]: USER_END pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.810020 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:56432.service: Deactivated successfully. Jun 25 16:33:40.811089 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:33:40.811777 systemd-logind[1274]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:33:40.812584 systemd-logind[1274]: Removed session 13. Jun 25 16:33:40.807000 audit[4390]: CRED_DISP pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.827121 kernel: audit: type=1106 audit(1719333220.807:647): pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.827281 kernel: audit: type=1104 audit(1719333220.807:648): pid=4390 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:40.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:56432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:42.491623 containerd[1288]: time="2024-06-25T16:33:42.490407023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:42.496303 containerd[1288]: time="2024-06-25T16:33:42.496191754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:33:42.510721 containerd[1288]: time="2024-06-25T16:33:42.509991540Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:42.537503 containerd[1288]: time="2024-06-25T16:33:42.536782087Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:42.605781 containerd[1288]: time="2024-06-25T16:33:42.605614711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:42.607188 containerd[1288]: time="2024-06-25T16:33:42.607111191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.859644034s" Jun 25 16:33:42.607188 containerd[1288]: time="2024-06-25T16:33:42.607179892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:33:42.608397 containerd[1288]: time="2024-06-25T16:33:42.608290851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:33:42.626475 containerd[1288]: time="2024-06-25T16:33:42.626400638Z" level=info msg="CreateContainer within sandbox \"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:33:42.960779 containerd[1288]: time="2024-06-25T16:33:42.960559118Z" level=info msg="CreateContainer within sandbox \"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549\"" Jun 25 16:33:42.970049 containerd[1288]: time="2024-06-25T16:33:42.969714037Z" level=info msg="StartContainer for \"68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549\"" Jun 25 16:33:43.060317 systemd[1]: Started cri-containerd-68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549.scope - libcontainer container 68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549. 
Jun 25 16:33:43.084000 audit: BPF prog-id=170 op=LOAD Jun 25 16:33:43.084000 audit: BPF prog-id=171 op=LOAD Jun 25 16:33:43.084000 audit[4433]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4063 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633562323765353738666464323839323665393239383032333239 Jun 25 16:33:43.084000 audit: BPF prog-id=172 op=LOAD Jun 25 16:33:43.084000 audit[4433]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4063 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633562323765353738666464323839323665393239383032333239 Jun 25 16:33:43.084000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:33:43.084000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:33:43.084000 audit: BPF prog-id=173 op=LOAD Jun 25 16:33:43.084000 audit[4433]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4063 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633562323765353738666464323839323665393239383032333239 Jun 25 16:33:43.230142 containerd[1288]: time="2024-06-25T16:33:43.227305932Z" level=info msg="StartContainer for \"68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549\" returns successfully" Jun 25 16:33:43.350623 kubelet[2286]: I0625 16:33:43.350554 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dfd458b6c-tdlbz" podStartSLOduration=46.82038417 podCreationTimestamp="2024-06-25 16:32:49 +0000 UTC" firstStartedPulling="2024-06-25 16:33:35.078054388 +0000 UTC m=+69.078333880" lastFinishedPulling="2024-06-25 16:33:42.607843802 +0000 UTC m=+76.608123294" observedRunningTime="2024-06-25 16:33:43.349544931 +0000 UTC m=+77.349824423" watchObservedRunningTime="2024-06-25 16:33:43.350173584 +0000 UTC m=+77.350453096" Jun 25 16:33:45.043304 containerd[1288]: time="2024-06-25T16:33:45.043191146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:45.046915 containerd[1288]: time="2024-06-25T16:33:45.046792477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:33:45.052351 containerd[1288]: time="2024-06-25T16:33:45.052244469Z" level=info 
msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:45.057461 containerd[1288]: time="2024-06-25T16:33:45.057402592Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:45.058728 containerd[1288]: time="2024-06-25T16:33:45.058688562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:45.059795 containerd[1288]: time="2024-06-25T16:33:45.059698888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.451348374s" Jun 25 16:33:45.059795 containerd[1288]: time="2024-06-25T16:33:45.059770564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:33:45.062640 containerd[1288]: time="2024-06-25T16:33:45.062581315Z" level=info msg="CreateContainer within sandbox \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:33:45.098109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365176435.mount: Deactivated successfully. Jun 25 16:33:45.110138 containerd[1288]: time="2024-06-25T16:33:45.109810176Z" level=info msg="CreateContainer within sandbox \"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0\"" Jun 25 16:33:45.122043 containerd[1288]: time="2024-06-25T16:33:45.112623232Z" level=info msg="StartContainer for \"a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0\"" Jun 25 16:33:45.154995 kubelet[2286]: E0625 16:33:45.149657 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:45.279404 systemd[1]: Started cri-containerd-a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0.scope - libcontainer container a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0. 
Jun 25 16:33:45.311000 audit: BPF prog-id=174 op=LOAD Jun 25 16:33:45.311000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4065 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:45.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353137376338616439366638393861363937343639353864623739 Jun 25 16:33:45.312000 audit: BPF prog-id=175 op=LOAD Jun 25 16:33:45.312000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4065 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:45.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353137376338616439366638393861363937343639353864623739 Jun 25 16:33:45.312000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:33:45.312000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:33:45.312000 audit: BPF prog-id=176 op=LOAD Jun 25 16:33:45.312000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4065 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:45.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353137376338616439366638393861363937343639353864623739 Jun 25 16:33:45.417142 containerd[1288]: time="2024-06-25T16:33:45.417050764Z" level=info msg="StartContainer for \"a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0\" returns successfully" Jun 25 16:33:45.856238 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 16:33:45.856435 kernel: audit: type=1130 audit(1719333225.842:661): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:56436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:45.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:56436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:45.847205 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436). 
Jun 25 16:33:45.936000 audit[4530]: USER_ACCT pid=4530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:45.942992 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:45.944134 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:45.972847 kernel: audit: type=1101 audit(1719333225.936:662): pid=4530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:45.977352 kernel: audit: type=1103 audit(1719333225.941:663): pid=4530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:45.977416 kernel: audit: type=1006 audit(1719333225.941:664): pid=4530 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:33:45.977440 kernel: audit: type=1300 audit(1719333225.941:664): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff32f7bf60 a2=3 a3=7fefd925d480 items=0 ppid=1 pid=4530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:45.941000 audit[4530]: CRED_ACQ pid=4530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:45.941000 audit[4530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff32f7bf60 a2=3 a3=7fefd925d480 items=0 ppid=1 pid=4530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:45.974464 systemd-logind[1274]: New session 14 of user core. Jun 25 16:33:45.981915 kernel: audit: type=1327 audit(1719333225.941:664): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:45.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:45.981080 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:33:46.009000 audit[4530]: USER_START pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.021934 kernel: audit: type=1105 audit(1719333226.009:665): pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.022095 kernel: audit: type=1103 audit(1719333226.017:666): pid=4532 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.017000 audit[4532]: CRED_ACQ pid=4532 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.095145 systemd[1]: run-containerd-runc-k8s.io-a55177c8ad96f898a69746958db79e3b671611aaf2b0413b960e5a53ba42a1a0-runc.H7noNB.mount: Deactivated successfully. Jun 25 16:33:46.306538 sshd[4530]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:46.307000 audit[4530]: USER_END pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.307000 audit[4530]: CRED_DISP pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.316099 kernel: audit: type=1106 audit(1719333226.307:667): pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.316228 kernel: audit: type=1104 audit(1719333226.307:668): pid=4530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.322456 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:56436.service: Deactivated successfully. Jun 25 16:33:46.323303 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:33:46.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:56436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:46.325700 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:53638.service - OpenSSH per-connection server daemon (10.0.0.1:53638). 
Jun 25 16:33:46.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:53638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:46.326726 systemd-logind[1274]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:33:46.332166 systemd-logind[1274]: Removed session 14. Jun 25 16:33:46.362717 kubelet[2286]: I0625 16:33:46.361719 2286 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:33:46.362717 kubelet[2286]: I0625 16:33:46.361782 2286 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:33:46.369215 kubelet[2286]: I0625 16:33:46.368760 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7xkz9" podStartSLOduration=48.359749068 podCreationTimestamp="2024-06-25 16:32:48 +0000 UTC" firstStartedPulling="2024-06-25 16:33:35.051572835 +0000 UTC m=+69.051852328" lastFinishedPulling="2024-06-25 16:33:45.060428962 +0000 UTC m=+79.060708454" observedRunningTime="2024-06-25 16:33:46.364112996 +0000 UTC m=+80.364392488" watchObservedRunningTime="2024-06-25 16:33:46.368605194 +0000 UTC m=+80.368884687" Jun 25 16:33:46.368000 audit[4548]: USER_ACCT pid=4548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.369605 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 53638 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:46.370000 audit[4548]: CRED_ACQ pid=4548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.370000 audit[4548]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe583fa140 a2=3 a3=7f7b57ea6480 items=0 ppid=1 pid=4548 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:46.370000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:46.371358 sshd[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:46.385137 systemd-logind[1274]: New session 15 of user core. Jun 25 16:33:46.396201 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:33:46.422000 audit[4548]: USER_START pid=4548 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:46.424000 audit[4551]: CRED_ACQ pid=4551 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.275121 sshd[4548]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:48.275000 audit[4548]: USER_END pid=4548 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.276000 audit[4548]: CRED_DISP pid=4548 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.282849 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:53638.service: Deactivated successfully. Jun 25 16:33:48.283516 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:33:48.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:53638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:48.284371 systemd-logind[1274]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:33:48.293280 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:53640.service - OpenSSH per-connection server daemon (10.0.0.1:53640). Jun 25 16:33:48.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:53640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:48.295247 systemd-logind[1274]: Removed session 15. 
Jun 25 16:33:48.323000 audit[4564]: USER_ACCT pid=4564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.325019 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:48.325000 audit[4564]: CRED_ACQ pid=4564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.325000 audit[4564]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2111d440 a2=3 a3=7fc22d5eb480 items=0 ppid=1 pid=4564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:48.325000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:48.326195 sshd[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:48.332257 systemd-logind[1274]: New session 16 of user core. Jun 25 16:33:48.338924 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:33:48.342000 audit[4564]: USER_START pid=4564 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.343000 audit[4566]: CRED_ACQ pid=4566 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.584553 sshd[4564]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:48.586000 audit[4564]: USER_END pid=4564 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.586000 audit[4564]: CRED_DISP pid=4564 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:48.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:53640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:48.589690 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:53640.service: Deactivated successfully. Jun 25 16:33:48.591174 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:33:48.592499 systemd-logind[1274]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:33:48.595195 systemd-logind[1274]: Removed session 16. 
Jun 25 16:33:53.131963 kubelet[2286]: E0625 16:33:53.131500 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:33:53.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:53648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:53.638826 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:53648.service - OpenSSH per-connection server daemon (10.0.0.1:53648). Jun 25 16:33:53.647503 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:33:53.647621 kernel: audit: type=1130 audit(1719333233.638:688): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:53648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:53.683000 audit[4610]: USER_ACCT pid=4610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.685579 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 53648 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:53.686921 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:53.693514 systemd-logind[1274]: New session 17 of user core. Jun 25 16:33:53.733051 kernel: audit: type=1101 audit(1719333233.683:689): pid=4610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.733092 kernel: audit: type=1103 audit(1719333233.684:690): pid=4610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.733123 kernel: audit: type=1006 audit(1719333233.684:691): pid=4610 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:33:53.733159 kernel: audit: type=1300 audit(1719333233.684:691): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd49ead6a0 a2=3 a3=7f7b482da480 items=0 ppid=1 pid=4610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:53.733195 kernel: audit: type=1327 audit(1719333233.684:691): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:53.684000 audit[4610]: CRED_ACQ pid=4610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.684000 audit[4610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd49ead6a0 a2=3 a3=7f7b482da480 items=0 ppid=1 pid=4610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:53.684000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:53.732669 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 16:33:53.751000 audit[4610]: USER_START pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.752000 audit[4612]: CRED_ACQ pid=4612 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.796254 kernel: audit: type=1105 audit(1719333233.751:692): pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.796386 kernel: audit: type=1103 audit(1719333233.752:693): pid=4612 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.953035 sshd[4610]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:53.955000 audit[4610]: USER_END pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.959074 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:53648.service: Deactivated successfully. Jun 25 16:33:53.960065 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:33:53.962106 systemd-logind[1274]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:33:53.963212 systemd-logind[1274]: Removed session 17. Jun 25 16:33:53.955000 audit[4610]: CRED_DISP pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:54.008254 kernel: audit: type=1106 audit(1719333233.955:694): pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:54.008434 kernel: audit: type=1104 audit(1719333233.955:695): pid=4610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:53.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:53648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:58.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:53052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:33:58.986803 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:58.986845 kernel: audit: type=1130 audit(1719333238.979:697): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:53052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:58.980622 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). Jun 25 16:33:59.042000 audit[4629]: USER_ACCT pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.044916 sshd[4629]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:33:59.046813 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:59.066837 kernel: audit: type=1101 audit(1719333239.042:698): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.045000 audit[4629]: CRED_ACQ pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.072777 kernel: audit: type=1103 audit(1719333239.045:699): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.072875 kernel: audit: type=1006 audit(1719333239.045:700): pid=4629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:33:59.074071 systemd-logind[1274]: New session 18 of user core. Jun 25 16:33:59.102498 kernel: audit: type=1300 audit(1719333239.045:700): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9cd722f0 a2=3 a3=7f08fd789480 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:59.102549 kernel: audit: type=1327 audit(1719333239.045:700): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:59.045000 audit[4629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9cd722f0 a2=3 a3=7f08fd789480 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:59.045000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:59.101638 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:33:59.126000 audit[4629]: USER_START pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.135228 kernel: audit: type=1105 audit(1719333239.126:701): pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.135367 kernel: audit: type=1103 audit(1719333239.127:702): pid=4631 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.127000 audit[4631]: CRED_ACQ pid=4631 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.383854 sshd[4629]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:59.394305 kernel: audit: type=1106 audit(1719333239.382:703): pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.382000 audit[4629]: USER_END pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.389000 audit[4629]: CRED_DISP pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.396850 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:53052.service: Deactivated successfully. Jun 25 16:33:59.398158 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:33:59.400098 systemd-logind[1274]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:33:59.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:53052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:59.404919 kernel: audit: type=1104 audit(1719333239.389:704): pid=4629 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:33:59.406344 systemd-logind[1274]: Removed session 18. 
Jun 25 16:34:03.132495 kubelet[2286]: E0625 16:34:03.132437 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:34:04.400922 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). Jun 25 16:34:04.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:53062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:04.404408 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:04.404527 kernel: audit: type=1130 audit(1719333244.399:706): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:53062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:04.441000 audit[4668]: USER_ACCT pid=4668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.442234 sshd[4668]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:04.443959 sshd[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:04.442000 audit[4668]: CRED_ACQ pid=4668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.456541 systemd-logind[1274]: New session 19 of user core. 
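[Annotation] The recurring kubelet dns.go:153 warning fires because the node's resolv.conf lists more nameservers than kubelet will propagate to pods; it keeps only the first three (the classic glibc resolver limit), which is why just 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A rough check in the same spirit, using a hypothetical resolv.conf; this is an illustration, not kubelet's code:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // limit kubelet (and glibc) effectively honour

    // nameservers extracts "nameserver" entries from resolv.conf-style text.
    func nameservers(conf string) []string {
        var servers []string
        for _, line := range strings.Split(conf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        return servers
    }

    func main() {
        // Hypothetical host resolv.conf with four entries.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        servers := nameservers(conf)
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded, applying only: %v\n", servers[:maxNameservers])
        }
    }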
Jun 25 16:34:04.460720 kernel: audit: type=1101 audit(1719333244.441:707): pid=4668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.460868 kernel: audit: type=1103 audit(1719333244.442:708): pid=4668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.460916 kernel: audit: type=1006 audit(1719333244.442:709): pid=4668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:34:04.463032 kernel: audit: type=1300 audit(1719333244.442:709): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf4ec11a0 a2=3 a3=7f0a6f0a2480 items=0 ppid=1 pid=4668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:04.442000 audit[4668]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf4ec11a0 a2=3 a3=7f0a6f0a2480 items=0 ppid=1 pid=4668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:04.442000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:04.469554 kernel: audit: type=1327 audit(1719333244.442:709): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:04.472150 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:34:04.499000 audit[4668]: USER_START pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.505048 kernel: audit: type=1105 audit(1719333244.499:710): pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.505175 kernel: audit: type=1103 audit(1719333244.501:711): pid=4670 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.501000 audit[4670]: CRED_ACQ pid=4670 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.640175 sshd[4668]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:04.644000 audit[4668]: USER_END pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.647021 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:53062.service: Deactivated successfully. 
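[Annotation] The kernel "audit: type=NNNN" lines are numeric echoes of the named userspace records around them; the pairings visible in this log (type=1101 with USER_ACCT, type=1103 with CRED_ACQ, type=1130 with SERVICE_START, and so on, matched by serial number) follow the usual audit record types. A small lookup table as a sketch:

    package main

    import "fmt"

    // auditTypes maps the numeric record types appearing in this log to their
    // names, as paired in the surrounding journal lines.
    var auditTypes = map[int]string{
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1327: "PROCTITLE",
        1400: "AVC",
    }

    func main() {
        for _, t := range []int{1101, 1103, 1006, 1300, 1327, 1105} {
            fmt.Printf("type=%d -> %s\n", t, auditTypes[t])
        }
    }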
Jun 25 16:34:04.647993 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:34:04.649181 systemd-logind[1274]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:34:04.650536 systemd-logind[1274]: Removed session 19. Jun 25 16:34:04.644000 audit[4668]: CRED_DISP pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.668422 kernel: audit: type=1106 audit(1719333244.644:712): pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.668618 kernel: audit: type=1104 audit(1719333244.644:713): pid=4668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:04.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:53062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:08.132239 kubelet[2286]: E0625 16:34:08.132195 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:34:09.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:51610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:09.657780 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:51610.service - OpenSSH per-connection server daemon (10.0.0.1:51610). Jun 25 16:34:09.669005 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:09.669139 kernel: audit: type=1130 audit(1719333249.657:715): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:51610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:09.712000 audit[4682]: USER_ACCT pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.713880 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 51610 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:09.715254 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:09.723839 systemd-logind[1274]: New session 20 of user core. 
Jun 25 16:34:09.766771 kernel: audit: type=1101 audit(1719333249.712:716): pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.766825 kernel: audit: type=1103 audit(1719333249.712:717): pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.766863 kernel: audit: type=1006 audit(1719333249.712:718): pid=4682 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jun 25 16:34:09.766894 kernel: audit: type=1300 audit(1719333249.712:718): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9022f0c0 a2=3 a3=7f30ba480480 items=0 ppid=1 pid=4682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:09.766920 kernel: audit: type=1327 audit(1719333249.712:718): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:09.712000 audit[4682]: CRED_ACQ pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.712000 audit[4682]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9022f0c0 a2=3 a3=7f30ba480480 items=0 ppid=1 pid=4682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:09.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:09.767449 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:34:09.792000 audit[4682]: USER_START pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.794000 audit[4684]: CRED_ACQ pid=4684 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.802422 kernel: audit: type=1105 audit(1719333249.792:719): pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.802539 kernel: audit: type=1103 audit(1719333249.794:720): pid=4684 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.978077 sshd[4682]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:09.979000 audit[4682]: USER_END pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.981808 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:51610.service: Deactivated successfully. Jun 25 16:34:09.982891 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:34:09.984655 systemd-logind[1274]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:34:09.979000 audit[4682]: CRED_DISP pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.985690 systemd-logind[1274]: Removed session 20. Jun 25 16:34:10.005565 kernel: audit: type=1106 audit(1719333249.979:721): pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:10.005845 kernel: audit: type=1104 audit(1719333249.979:722): pid=4682 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:09.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:51610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:14.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.149:22-10.0.0.1:51624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:14.988608 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:51624.service - OpenSSH per-connection server daemon (10.0.0.1:51624). Jun 25 16:34:14.999783 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:14.999946 kernel: audit: type=1130 audit(1719333254.987:724): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.149:22-10.0.0.1:51624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:15.022346 sshd[4708]: Accepted publickey for core from 10.0.0.1 port 51624 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:15.021000 audit[4708]: USER_ACCT pid=4708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.024162 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:15.023000 audit[4708]: CRED_ACQ pid=4708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.030180 kernel: audit: type=1101 audit(1719333255.021:725): pid=4708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.030302 kernel: audit: type=1103 audit(1719333255.023:726): pid=4708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.030326 kernel: audit: type=1006 audit(1719333255.023:727): pid=4708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 16:34:15.031947 kernel: audit: type=1300 audit(1719333255.023:727): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf8cf1000 a2=3 a3=7f94354cd480 items=0 ppid=1 pid=4708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:15.023000 audit[4708]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf8cf1000 a2=3 a3=7f94354cd480 items=0 ppid=1 pid=4708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:15.041605 kernel: audit: type=1327 audit(1719333255.023:727): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:15.023000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:15.041908 systemd-logind[1274]: New session 21 of user core. Jun 25 16:34:15.051035 systemd[1]: Started session-21.scope - Session 21 of User core. 
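[Annotation] The "Accepted publickey for core ... SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA" lines identify the key by an OpenSSH-style fingerprint: unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch computing that form from an authorized_keys-style line (the key material below is a placeholder, not the key from this log):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "strings"
    )

    // fingerprintSHA256 computes an OpenSSH-style SHA256 fingerprint from an
    // authorized_keys line of the form "ssh-rsa AAAA... comment".
    func fingerprintSHA256(authorizedKey string) (string, error) {
        fields := strings.Fields(authorizedKey)
        if len(fields) < 2 {
            return "", fmt.Errorf("malformed key line")
        }
        blob, err := base64.StdEncoding.DecodeString(fields[1])
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(blob)
        return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
    }

    func main() {
        // Placeholder key material for illustration only.
        key := "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 example@host"
        fp, err := fingerprintSHA256(key)
        if err != nil {
            panic(err)
        }
        fmt.Println(fp)
    }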
Jun 25 16:34:15.074000 audit[4708]: USER_START pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.080000 audit[4710]: CRED_ACQ pid=4710 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.129210 kernel: audit: type=1105 audit(1719333255.074:728): pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.129395 kernel: audit: type=1103 audit(1719333255.080:729): pid=4710 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.486657 sshd[4708]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:15.488000 audit[4708]: USER_END pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.490920 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:51624.service: Deactivated successfully. Jun 25 16:34:15.492344 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:34:15.494219 systemd-logind[1274]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:34:15.488000 audit[4708]: CRED_DISP pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.514176 kernel: audit: type=1106 audit(1719333255.488:730): pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.514323 kernel: audit: type=1104 audit(1719333255.488:731): pid=4708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:15.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.149:22-10.0.0.1:51624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:15.520043 systemd-logind[1274]: Removed session 21. Jun 25 16:34:18.866091 systemd[1]: run-containerd-runc-k8s.io-68c5b27e578fdd28926e9298023293c06316d3e56cc5968a93082eca389d1549-runc.uPNe5D.mount: Deactivated successfully. 
Jun 25 16:34:20.244520 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:20.244690 kernel: audit: type=1400 audit(1719333260.240:733): avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.240000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.240000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0023e2690 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:20.301538 kernel: audit: type=1300 audit(1719333260.240:733): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0023e2690 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:20.301721 kernel: audit: type=1327 audit(1719333260.240:733): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:20.240000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:20.310435 kernel: audit: type=1400 audit(1719333260.244:734): avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.244000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.244000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028f18c0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:20.320488 kernel: audit: type=1300 audit(1719333260.244:734): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028f18c0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:20.320646 kernel: audit: type=1327 audit(1719333260.244:734): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:20.244000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:20.483525 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:48834.service - OpenSSH per-connection server daemon (10.0.0.1:48834). Jun 25 16:34:20.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.149:22-10.0.0.1:48834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:20.496246 kernel: audit: type=1130 audit(1719333260.482:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.149:22-10.0.0.1:48834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:20.594000 audit[4763]: USER_ACCT pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.597824 sshd[4763]: Accepted publickey for core from 10.0.0.1 port 48834 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:20.598249 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:20.596000 audit[4763]: CRED_ACQ pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.615565 kernel: audit: type=1101 audit(1719333260.594:736): pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.615717 kernel: audit: type=1103 audit(1719333260.596:737): pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.615740 kernel: audit: type=1006 audit(1719333260.597:738): pid=4763 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:34:20.597000 audit[4763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea7b53a70 a2=3 a3=7f8394a4d480 items=0 ppid=1 pid=4763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.597000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:20.628767 systemd-logind[1274]: New session 22 of user core. Jun 25 16:34:20.637404 systemd[1]: Started session-22.scope - Session 22 of User core. 
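[Annotation] In the AVC records above, arch=c000003e is x86_64, syscall=254 is inotify_add_watch, and exit=-13 is -EACCES: the confined kube-controller-manager (container_t) tried to place an inotify watch on certificate files labelled etc_t and SELinux denied it in enforcing mode (permissive=0). A small decoder for those fields, as an illustration; only the syscall numbers seen in this log are included:

    package main

    import (
        "fmt"
        "syscall"
    )

    // x86_64 syscall numbers used by the SYSCALL records in this log.
    var x8664Syscalls = map[int]string{
        1:   "write",
        254: "inotify_add_watch",
    }

    // describe renders an audit SYSCALL triple into readable form; a negative
    // exit value is the negated errno.
    func describe(arch string, nr, exit int) string {
        name := x8664Syscalls[nr]
        if exit < 0 {
            return fmt.Sprintf("arch=%s %s failed: %v", arch, name, syscall.Errno(-exit))
        }
        return fmt.Sprintf("arch=%s %s returned %d", arch, name, exit)
    }

    func main() {
        // arch=c000003e syscall=254 exit=-13 from the kube-controller AVC above.
        fmt.Println(describe("c000003e (x86_64)", 254, -13)) // ... failed: permission denied
    }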
Jun 25 16:34:20.659000 audit[4763]: USER_START pid=4763 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.663000 audit[4765]: CRED_ACQ pid=4765 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.902698 sshd[4763]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:20.907000 audit[4763]: USER_END pid=4763 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.908000 audit[4763]: CRED_DISP pid=4763 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:20.911019 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:48834.service: Deactivated successfully. Jun 25 16:34:20.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.149:22-10.0.0.1:48834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:20.912026 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:34:20.913211 systemd-logind[1274]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:34:20.918372 systemd-logind[1274]: Removed session 22. 
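[Annotation] The per-connection units above follow systemd's naming for Accept=yes socket instances: sshd@<sequence>-<local address>:<port>-<peer address>:<port>.service. So sshd@21-10.0.0.149:22-10.0.0.1:48834.service appears to be the 21st accepted connection, from 10.0.0.1:48834 to the listener on 10.0.0.149:22, matching the "(10.0.0.1:48834)" suffix systemd prints. A parsing sketch, assuming IPv4 endpoints like the ones in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseUnit splits a per-connection instance name such as
    // "sshd@21-10.0.0.149:22-10.0.0.1:48834.service" into its parts.
    func parseUnit(unit string) (seq, local, peer string, err error) {
        name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(name, "-", 3)
        if len(parts) != 3 {
            return "", "", "", fmt.Errorf("unexpected unit name: %s", unit)
        }
        return parts[0], parts[1], parts[2], nil
    }

    func main() {
        seq, local, peer, err := parseUnit("sshd@21-10.0.0.149:22-10.0.0.1:48834.service")
        if err != nil {
            panic(err)
        }
        fmt.Printf("connection %s: %s -> %s\n", seq, peer, local)
    }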
Jun 25 16:34:20.997000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.997000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c01367f5e0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:20.997000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:20.998000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.998000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c014123500 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:20.998000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:20.998000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:20.998000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c012f27530 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:20.998000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:21.005000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6279 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:21.005000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c0135fe870 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:21.005000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:21.022000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:21.022000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c0135fe960 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:21.022000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:21.022000 audit[2186]: AVC avc: denied { watch } for pid=2186 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c192,c785 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:21.022000 audit[2186]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c0131ab7a0 a2=fc6 a3=0 items=0 ppid=2007 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c192,c785 key=(null) Jun 25 16:34:21.022000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:34:24.134111 kubelet[2286]: E0625 16:34:24.131937 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:34:24.255103 update_engine[1280]: I0625 16:34:24.254060 1280 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 16:34:24.255103 update_engine[1280]: I0625 16:34:24.254119 1280 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 16:34:24.255103 update_engine[1280]: I0625 16:34:24.254531 1280 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 16:34:24.255103 update_engine[1280]: I0625 16:34:24.255030 1280 omaha_request_params.cc:62] Current group set to stable Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258558 1280 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258582 1280 update_attempter.cc:643] Scheduling an action processor start. 
Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258601 1280 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258661 1280 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258728 1280 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258733 1280 omaha_request_action.cc:272] Request: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: Jun 25 16:34:24.261119 update_engine[1280]: I0625 16:34:24.258736 1280 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 16:34:24.269520 update_engine[1280]: I0625 16:34:24.268903 1280 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 16:34:24.269730 locksmithd[1296]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 16:34:24.272915 update_engine[1280]: I0625 16:34:24.272814 1280 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 16:34:24.283708 update_engine[1280]: E0625 16:34:24.283624 1280 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 16:34:24.283899 update_engine[1280]: I0625 16:34:24.283808 1280 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 16:34:25.936509 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:48848.service - OpenSSH per-connection server daemon (10.0.0.1:48848). Jun 25 16:34:25.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.149:22-10.0.0.1:48848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:25.950076 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:34:25.950226 kernel: audit: type=1130 audit(1719333265.941:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.149:22-10.0.0.1:48848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:25.988000 audit[4781]: USER_ACCT pid=4781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:25.997001 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 48848 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:25.998862 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:26.006601 systemd-logind[1274]: New session 23 of user core. 
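[Annotation] The update_engine block above shows a periodic Omaha check being posted to the literal host "disabled" (presumably the configured update server name, a common way of switching automatic updates off on this platform), so libcurl fails with "Could not resolve host: disabled" and the fetcher schedules retry 1 with a 1-second timeout. A rough sketch of that observed fetch-and-retry behaviour; this is an illustration of what the log shows, not update_engine's implementation, and the URL is hypothetical:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // mirrors the 1-second timeout in the log
        const server = "https://disabled/update"     // hypothetical URL built from the configured server name
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Post(server, "text/xml", nil)
            if err != nil {
                fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
                time.Sleep(time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("HTTP status:", resp.Status)
            return
        }
    }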
Jun 25 16:34:25.997000 audit[4781]: CRED_ACQ pid=4781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.044445 kernel: audit: type=1101 audit(1719333265.988:751): pid=4781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.044599 kernel: audit: type=1103 audit(1719333265.997:752): pid=4781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.044648 kernel: audit: type=1006 audit(1719333265.997:753): pid=4781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:34:25.997000 audit[4781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf4ce3a50 a2=3 a3=7fcdceb77480 items=0 ppid=1 pid=4781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.117795 kernel: audit: type=1300 audit(1719333265.997:753): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf4ce3a50 a2=3 a3=7fcdceb77480 items=0 ppid=1 pid=4781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.117889 kernel: audit: type=1327 audit(1719333265.997:753): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:25.997000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:26.121990 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 16:34:26.134000 audit[4781]: USER_START pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.138000 audit[4783]: CRED_ACQ pid=4783 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.207258 kernel: audit: type=1105 audit(1719333266.134:754): pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.207395 kernel: audit: type=1103 audit(1719333266.138:755): pid=4783 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.371373 sshd[4781]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:26.374000 audit[4781]: USER_END pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.374000 audit[4781]: CRED_DISP pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.418542 kernel: audit: type=1106 audit(1719333266.374:756): pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.418670 kernel: audit: type=1104 audit(1719333266.374:757): pid=4781 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.149:22-10.0.0.1:48848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:26.427059 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:48848.service: Deactivated successfully. Jun 25 16:34:26.439371 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:34:26.447022 systemd-logind[1274]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:34:26.460487 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Jun 25 16:34:26.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.149:22-10.0.0.1:46914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:26.462870 systemd-logind[1274]: Removed session 23. Jun 25 16:34:26.502000 audit[4796]: USER_ACCT pid=4796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.503616 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:26.503000 audit[4796]: CRED_ACQ pid=4796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.503000 audit[4796]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff63801f60 a2=3 a3=7f9244d9a480 items=0 ppid=1 pid=4796 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.503000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:26.505138 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:26.515222 systemd-logind[1274]: New session 24 of user core. Jun 25 16:34:26.520198 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 16:34:26.523679 containerd[1288]: time="2024-06-25T16:34:26.523639031Z" level=info msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" Jun 25 16:34:26.525000 audit[4796]: USER_START pid=4796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.527000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.576 [WARNING][4815] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--vschb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b863dc49-acd3-403d-a912-7a94220388dd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715", Pod:"coredns-5dd5756b68-vschb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideb3e38e492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.576 [INFO][4815] k8s.go 608: Cleaning up netns ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.576 [INFO][4815] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" iface="eth0" netns="" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.576 [INFO][4815] k8s.go 615: Releasing IP address(es) ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.576 [INFO][4815] utils.go 188: Calico CNI releasing IP address ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.641 [INFO][4823] ipam_plugin.go 411: Releasing address using handleID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.641 [INFO][4823] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.642 [INFO][4823] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.648 [WARNING][4823] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.648 [INFO][4823] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.650 [INFO][4823] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:26.654257 containerd[1288]: 2024-06-25 16:34:26.652 [INFO][4815] k8s.go 621: Teardown processing complete. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.654715 containerd[1288]: time="2024-06-25T16:34:26.654312756Z" level=info msg="TearDown network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" successfully" Jun 25 16:34:26.654715 containerd[1288]: time="2024-06-25T16:34:26.654365405Z" level=info msg="StopPodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" returns successfully" Jun 25 16:34:26.655300 containerd[1288]: time="2024-06-25T16:34:26.655244563Z" level=info msg="RemovePodSandbox for \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" Jun 25 16:34:26.655475 containerd[1288]: time="2024-06-25T16:34:26.655315247Z" level=info msg="Forcibly stopping sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\"" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.780 [WARNING][4844] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--vschb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b863dc49-acd3-403d-a912-7a94220388dd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2df3e92d3501f48c84fd994a5cbff9e09e969ab37df2af59ba9814e943b715", Pod:"coredns-5dd5756b68-vschb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideb3e38e492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.780 [INFO][4844] k8s.go 608: Cleaning up netns ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.780 [INFO][4844] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" iface="eth0" netns="" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.780 [INFO][4844] k8s.go 615: Releasing IP address(es) ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.780 [INFO][4844] utils.go 188: Calico CNI releasing IP address ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.816 [INFO][4851] ipam_plugin.go 411: Releasing address using handleID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.816 [INFO][4851] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.816 [INFO][4851] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.826 [WARNING][4851] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.826 [INFO][4851] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" HandleID="k8s-pod-network.c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Workload="localhost-k8s-coredns--5dd5756b68--vschb-eth0" Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.831 [INFO][4851] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:26.835056 containerd[1288]: 2024-06-25 16:34:26.833 [INFO][4844] k8s.go 621: Teardown processing complete. ContainerID="c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273" Jun 25 16:34:26.835686 containerd[1288]: time="2024-06-25T16:34:26.835111583Z" level=info msg="TearDown network for sandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" successfully" Jun 25 16:34:27.194602 containerd[1288]: time="2024-06-25T16:34:27.194454991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:27.194602 containerd[1288]: time="2024-06-25T16:34:27.194551553Z" level=info msg="RemovePodSandbox \"c06c30e64a2e6280c5414775a3b94d53e30bd422de208192ab1db8fadd774273\" returns successfully" Jun 25 16:34:27.196289 containerd[1288]: time="2024-06-25T16:34:27.196248773Z" level=info msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.240 [WARNING][4879] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p68b4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1ecf1669-3c1d-4bb9-be93-082a2bca0c94", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c", Pod:"coredns-5dd5756b68-p68b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3d2f637112", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.240 [INFO][4879] k8s.go 608: Cleaning up netns ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.240 [INFO][4879] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" iface="eth0" netns="" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.240 [INFO][4879] k8s.go 615: Releasing IP address(es) ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.240 [INFO][4879] utils.go 188: Calico CNI releasing IP address ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.301 [INFO][4887] ipam_plugin.go 411: Releasing address using handleID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.301 [INFO][4887] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.301 [INFO][4887] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.310 [WARNING][4887] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.310 [INFO][4887] ipam_plugin.go 439: Releasing address using workloadID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.313 [INFO][4887] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:27.317183 containerd[1288]: 2024-06-25 16:34:27.315 [INFO][4879] k8s.go 621: Teardown processing complete. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.318730 containerd[1288]: time="2024-06-25T16:34:27.317176958Z" level=info msg="TearDown network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" successfully" Jun 25 16:34:27.318730 containerd[1288]: time="2024-06-25T16:34:27.317223296Z" level=info msg="StopPodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" returns successfully" Jun 25 16:34:27.319090 containerd[1288]: time="2024-06-25T16:34:27.319036564Z" level=info msg="RemovePodSandbox for \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" Jun 25 16:34:27.319255 containerd[1288]: time="2024-06-25T16:34:27.319212486Z" level=info msg="Forcibly stopping sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\"" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.361 [WARNING][4910] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p68b4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1ecf1669-3c1d-4bb9-be93-082a2bca0c94", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"613c212012c34d9f26865041830514d7189cbbf4af30a8b6ac8a09998231f69c", Pod:"coredns-5dd5756b68-p68b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3d2f637112", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.361 [INFO][4910] k8s.go 608: Cleaning up netns ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.361 [INFO][4910] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" iface="eth0" netns="" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.361 [INFO][4910] k8s.go 615: Releasing IP address(es) ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.361 [INFO][4910] utils.go 188: Calico CNI releasing IP address ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.377 [INFO][4918] ipam_plugin.go 411: Releasing address using handleID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.377 [INFO][4918] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.377 [INFO][4918] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.383 [WARNING][4918] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.384 [INFO][4918] ipam_plugin.go 439: Releasing address using workloadID ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" HandleID="k8s-pod-network.56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Workload="localhost-k8s-coredns--5dd5756b68--p68b4-eth0" Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.386 [INFO][4918] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:27.389768 containerd[1288]: 2024-06-25 16:34:27.388 [INFO][4910] k8s.go 621: Teardown processing complete. ContainerID="56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b" Jun 25 16:34:27.390516 containerd[1288]: time="2024-06-25T16:34:27.389849583Z" level=info msg="TearDown network for sandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" successfully" Jun 25 16:34:27.619795 containerd[1288]: time="2024-06-25T16:34:27.618902183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:27.619795 containerd[1288]: time="2024-06-25T16:34:27.619010046Z" level=info msg="RemovePodSandbox \"56b196ee3993152b7406a72c8c010e6390421cf14fb4ba5b53e509f45775580b\" returns successfully" Jun 25 16:34:27.619795 containerd[1288]: time="2024-06-25T16:34:27.619694656Z" level=info msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.654 [WARNING][4940] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xkz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72bf43a2-ad8b-409f-8c68-9b745ebeb647", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245", Pod:"csi-node-driver-7xkz9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2c412db749b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.655 [INFO][4940] k8s.go 608: Cleaning up netns ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.655 [INFO][4940] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" iface="eth0" netns="" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.655 [INFO][4940] k8s.go 615: Releasing IP address(es) ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.655 [INFO][4940] utils.go 188: Calico CNI releasing IP address ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.675 [INFO][4949] ipam_plugin.go 411: Releasing address using handleID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.676 [INFO][4949] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.676 [INFO][4949] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.682 [WARNING][4949] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.682 [INFO][4949] ipam_plugin.go 439: Releasing address using workloadID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.684 [INFO][4949] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:27.687089 containerd[1288]: 2024-06-25 16:34:27.685 [INFO][4940] k8s.go 621: Teardown processing complete. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.687675 containerd[1288]: time="2024-06-25T16:34:27.687125990Z" level=info msg="TearDown network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" successfully" Jun 25 16:34:27.687675 containerd[1288]: time="2024-06-25T16:34:27.687165585Z" level=info msg="StopPodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" returns successfully" Jun 25 16:34:27.687740 containerd[1288]: time="2024-06-25T16:34:27.687715862Z" level=info msg="RemovePodSandbox for \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" Jun 25 16:34:27.687816 containerd[1288]: time="2024-06-25T16:34:27.687767549Z" level=info msg="Forcibly stopping sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\"" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.725 [WARNING][4973] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xkz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72bf43a2-ad8b-409f-8c68-9b745ebeb647", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56c556ee4f50a2aa289bf983fb9480ba68fec56e71204de9d573db54809a0245", Pod:"csi-node-driver-7xkz9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2c412db749b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.725 [INFO][4973] k8s.go 608: Cleaning up netns ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.725 [INFO][4973] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" iface="eth0" netns="" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.725 [INFO][4973] k8s.go 615: Releasing IP address(es) ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.725 [INFO][4973] utils.go 188: Calico CNI releasing IP address ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.745 [INFO][4981] ipam_plugin.go 411: Releasing address using handleID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.745 [INFO][4981] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.745 [INFO][4981] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.752 [WARNING][4981] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.753 [INFO][4981] ipam_plugin.go 439: Releasing address using workloadID ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" HandleID="k8s-pod-network.665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Workload="localhost-k8s-csi--node--driver--7xkz9-eth0" Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.754 [INFO][4981] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:27.757949 containerd[1288]: 2024-06-25 16:34:27.756 [INFO][4973] k8s.go 621: Teardown processing complete. ContainerID="665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b" Jun 25 16:34:27.758461 containerd[1288]: time="2024-06-25T16:34:27.758000685Z" level=info msg="TearDown network for sandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" successfully" Jun 25 16:34:27.928702 containerd[1288]: time="2024-06-25T16:34:27.928529736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:27.928702 containerd[1288]: time="2024-06-25T16:34:27.928644843Z" level=info msg="RemovePodSandbox \"665bdee73b3ab2877c80464e2b60d1dbdac954c662ddf79f4f4ddbfdecde6f6b\" returns successfully" Jun 25 16:34:27.929620 containerd[1288]: time="2024-06-25T16:34:27.929557644Z" level=info msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:27.989 [WARNING][5004] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0", GenerateName:"calico-kube-controllers-7dfd458b6c-", Namespace:"calico-system", SelfLink:"", UID:"18658f3c-a24d-421b-be97-f9cb52930d97", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd458b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62", Pod:"calico-kube-controllers-7dfd458b6c-tdlbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali843439c21e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:27.989 [INFO][5004] k8s.go 608: Cleaning up netns ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:27.989 [INFO][5004] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" iface="eth0" netns="" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:27.989 [INFO][5004] k8s.go 615: Releasing IP address(es) ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:27.989 [INFO][5004] utils.go 188: Calico CNI releasing IP address ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.012 [INFO][5011] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.012 [INFO][5011] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.013 [INFO][5011] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.018 [WARNING][5011] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.018 [INFO][5011] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.020 [INFO][5011] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:28.023155 containerd[1288]: 2024-06-25 16:34:28.021 [INFO][5004] k8s.go 621: Teardown processing complete. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.023657 containerd[1288]: time="2024-06-25T16:34:28.023204852Z" level=info msg="TearDown network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" successfully" Jun 25 16:34:28.023657 containerd[1288]: time="2024-06-25T16:34:28.023242612Z" level=info msg="StopPodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" returns successfully" Jun 25 16:34:28.023707 containerd[1288]: time="2024-06-25T16:34:28.023665649Z" level=info msg="RemovePodSandbox for \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" Jun 25 16:34:28.023753 containerd[1288]: time="2024-06-25T16:34:28.023698542Z" level=info msg="Forcibly stopping sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\"" Jun 25 16:34:28.040014 sshd[4796]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:28.040000 audit[4796]: USER_END pid=4796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.040000 audit[4796]: CRED_DISP pid=4796 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.048392 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:46914.service: Deactivated successfully. Jun 25 16:34:28.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.149:22-10.0.0.1:46914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:28.049046 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:34:28.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.149:22-10.0.0.1:46930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:28.052160 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:46930.service - OpenSSH per-connection server daemon (10.0.0.1:46930). Jun 25 16:34:28.053302 systemd-logind[1274]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:34:28.054572 systemd-logind[1274]: Removed session 24. 
Jun 25 16:34:28.081000 audit[5042]: USER_ACCT pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.082158 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 46930 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:28.082000 audit[5042]: CRED_ACQ pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.082000 audit[5042]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe277bbbc0 a2=3 a3=7f8d2df11480 items=0 ppid=1 pid=5042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:28.082000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:28.083726 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:28.092294 systemd-logind[1274]: New session 25 of user core. Jun 25 16:34:28.102663 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 16:34:28.111000 audit[5042]: USER_START pid=5042 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.113000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.073 [WARNING][5034] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0", GenerateName:"calico-kube-controllers-7dfd458b6c-", Namespace:"calico-system", SelfLink:"", UID:"18658f3c-a24d-421b-be97-f9cb52930d97", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd458b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c09455506f460958ef0bbbbf679128ad2dda6a537cf3e6a1f26b9c6e2a6ae62", Pod:"calico-kube-controllers-7dfd458b6c-tdlbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali843439c21e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.074 [INFO][5034] k8s.go 608: Cleaning up netns ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.074 [INFO][5034] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" iface="eth0" netns="" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.074 [INFO][5034] k8s.go 615: Releasing IP address(es) ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.074 [INFO][5034] utils.go 188: Calico CNI releasing IP address ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.103 [INFO][5046] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.103 [INFO][5046] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.103 [INFO][5046] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.110 [WARNING][5046] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.110 [INFO][5046] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" HandleID="k8s-pod-network.fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Workload="localhost-k8s-calico--kube--controllers--7dfd458b6c--tdlbz-eth0" Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.113 [INFO][5046] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:28.116213 containerd[1288]: 2024-06-25 16:34:28.114 [INFO][5034] k8s.go 621: Teardown processing complete. ContainerID="fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166" Jun 25 16:34:28.116706 containerd[1288]: time="2024-06-25T16:34:28.116237040Z" level=info msg="TearDown network for sandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" successfully" Jun 25 16:34:28.288009 containerd[1288]: time="2024-06-25T16:34:28.287948949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:28.288427 containerd[1288]: time="2024-06-25T16:34:28.288291255Z" level=info msg="RemovePodSandbox \"fd3cfd05ce067c4c21a7f687c74bc5f738f21996bc1b2d4133b61b8cc25e0166\" returns successfully" Jun 25 16:34:29.063000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:29.063000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002ba1d00 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:29.063000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:29.064000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:29.064000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00282af20 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:29.064000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:29.064000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:29.064000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002ba1ea0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:29.064000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:29.072000 audit[2123]: AVC avc: denied { watch } for pid=2123 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6262 scontext=system_u:system_r:container_t:s0:c62,c284 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:29.072000 audit[2123]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00282b0c0 a2=fc6 a3=0 items=0 ppid=2006 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c62,c284 key=(null) Jun 25 16:34:29.072000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:29.246000 audit[5075]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:29.246000 audit[5075]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc4f251cd0 a2=0 a3=7ffc4f251cbc items=0 ppid=2478 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:29.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:29.247000 audit[5075]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:29.247000 audit[5075]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4f251cd0 a2=0 a3=0 items=0 ppid=2478 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:29.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:29.261000 
audit[5077]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:29.261000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffce4f221e0 a2=0 a3=7ffce4f221cc items=0 ppid=2478 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:29.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:29.264000 audit[5077]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:29.264000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce4f221e0 a2=0 a3=0 items=0 ppid=2478 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:29.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:29.277718 sshd[5042]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:29.284000 audit[5042]: USER_END pid=5042 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:29.284000 audit[5042]: CRED_DISP pid=5042 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:29.290897 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:46930.service: Deactivated successfully. Jun 25 16:34:29.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.149:22-10.0.0.1:46930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:29.291901 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:34:29.298589 systemd-logind[1274]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:34:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.149:22-10.0.0.1:46936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:29.301954 systemd[1]: Started sshd@25-10.0.0.149:22-10.0.0.1:46936.service - OpenSSH per-connection server daemon (10.0.0.1:46936). Jun 25 16:34:29.304238 systemd-logind[1274]: Removed session 25. 
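Editor's note: the proctitle= fields in the audit records above are the process command lines, hex-encoded with NUL bytes separating the arguments. A small stdlib-only helper (an illustration, not part of any tooling referenced in this log) can decode them: the iptables-restore records decode to "iptables-restore -w 5 -W 100000 --noflush --counters", and 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]".

```go
// Decode the hex-encoded proctitle= field from an audit PROCTITLE record.
// Audit separates argv entries with NUL bytes, so splitting on "\x00" and
// joining with spaces recovers the command line.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", fmt.Errorf("invalid proctitle hex: %w", err)
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	for _, h := range []string{
		"737368643A20636F7265205B707269765D",
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
	} {
		cmd, err := decodeProctitle(h)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(cmd)
	}
}
```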
Jun 25 16:34:29.355000 audit[5080]: USER_ACCT pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:29.357475 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 46936 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:29.356000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:29.356000 audit[5080]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7cc95e10 a2=3 a3=7f601da0b480 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:29.356000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:29.357943 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:29.366488 systemd-logind[1274]: New session 26 of user core. Jun 25 16:34:29.373918 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 16:34:29.386000 audit[5080]: USER_START pid=5080 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:29.389000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.029450 sshd[5080]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:30.030000 audit[5080]: USER_END pid=5080 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.030000 audit[5080]: CRED_DISP pid=5080 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.149:22-10.0.0.1:46936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:30.038594 systemd[1]: sshd@25-10.0.0.149:22-10.0.0.1:46936.service: Deactivated successfully. Jun 25 16:34:30.039196 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:34:30.040048 systemd-logind[1274]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:34:30.045287 systemd[1]: Started sshd@26-10.0.0.149:22-10.0.0.1:46946.service - OpenSSH per-connection server daemon (10.0.0.1:46946). 
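Editor's note: the "Accepted publickey ... SHA256:3rMeAYqoNn4w..." lines identify the client key by its OpenSSH fingerprint, which is the SHA-256 digest of the raw public-key blob, base64-encoded without padding. The sketch below shows that derivation for a hypothetical authorized_keys entry; the key material is made up for illustration and is not the key behind this log's fingerprint.

```go
// Compute an OpenSSH-style "SHA256:..." fingerprint from an authorized_keys
// line: base64-decode the key blob, hash it with SHA-256, and base64-encode
// the digest without padding.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

func fingerprint(authorizedKeysLine string) (string, error) {
	fields := strings.Fields(authorizedKeysLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Hypothetical key material, used only to exercise the helper.
	line := "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB5nYhMGvC0JdHhJmXyZzXo0m8uJH0Zq3Yc9p5S1T2Qa core"
	fp, err := fingerprint(line)
	fmt.Println(fp, err)
}
```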
Jun 25 16:34:30.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.149:22-10.0.0.1:46946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:30.046634 systemd-logind[1274]: Removed session 26. Jun 25 16:34:30.075000 audit[5092]: USER_ACCT pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.076365 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 46946 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:30.076000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.076000 audit[5092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdea60e540 a2=3 a3=7f27493b2480 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.076000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:30.078009 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:30.094369 systemd[1]: run-containerd-runc-k8s.io-913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5-runc.fScjH0.mount: Deactivated successfully. Jun 25 16:34:30.099602 systemd-logind[1274]: New session 27 of user core. Jun 25 16:34:30.103891 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:34:30.110000 audit[5092]: USER_START pid=5092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.112000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.187773 kubelet[2286]: E0625 16:34:30.187719 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:34:30.310519 sshd[5092]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:30.312000 audit[5092]: USER_END pid=5092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.312000 audit[5092]: CRED_DISP pid=5092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:30.314226 systemd[1]: sshd@26-10.0.0.149:22-10.0.0.1:46946.service: Deactivated successfully. 
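Editor's note: the kubelet dns.go "Nameserver limits exceeded" warning above fires when the node's resolv.conf lists more nameservers than the three-entry resolver limit kubelet enforces; only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied and the rest are omitted. The sketch below reproduces that trimming for a hypothetical resolv.conf that contains a fourth entry; it is an illustration of the behaviour described in the warning, not kubelet's implementation.

```go
// Keep at most the first three nameservers from a resolv.conf-style body,
// mirroring the "applied nameserver line" behaviour the kubelet warning
// describes. The fourth nameserver below is a hypothetical extra entry.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit that triggers the warning

func applyNameserverLimit(resolvConf string) (applied, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return applied, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, dropped := applyNameserverLimit(conf)
	fmt.Println("applied:", applied)
	if len(dropped) > 0 {
		fmt.Println("nameserver limits exceeded, omitted:", dropped)
	}
}
```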
Jun 25 16:34:30.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.149:22-10.0.0.1:46946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:30.315201 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:34:30.315832 systemd-logind[1274]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:34:30.316553 systemd-logind[1274]: Removed session 27. Jun 25 16:34:34.174694 update_engine[1280]: I0625 16:34:34.174636 1280 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 16:34:34.175076 update_engine[1280]: I0625 16:34:34.174899 1280 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 16:34:34.175105 update_engine[1280]: I0625 16:34:34.175081 1280 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 16:34:34.190032 update_engine[1280]: E0625 16:34:34.190011 1280 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 16:34:34.190107 update_engine[1280]: I0625 16:34:34.190090 1280 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 16:34:35.322849 systemd[1]: Started sshd@27-10.0.0.149:22-10.0.0.1:46956.service - OpenSSH per-connection server daemon (10.0.0.1:46956). Jun 25 16:34:35.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:46956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:35.404256 kernel: kauditd_printk_skb: 69 callbacks suppressed Jun 25 16:34:35.404427 kernel: audit: type=1130 audit(1719333275.322:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:46956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:35.426000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.427952 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 46956 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:35.432547 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:35.433249 kernel: audit: type=1101 audit(1719333275.426:804): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.433291 kernel: audit: type=1103 audit(1719333275.431:805): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.431000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.437796 systemd-logind[1274]: New session 28 of user core. 
Jun 25 16:34:35.526781 kernel: audit: type=1006 audit(1719333275.431:806): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:34:35.526916 kernel: audit: type=1300 audit(1719333275.431:806): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc631c6c0 a2=3 a3=7f6fa6641480 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:35.431000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc631c6c0 a2=3 a3=7f6fa6641480 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:35.431000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:35.531686 kernel: audit: type=1327 audit(1719333275.431:806): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:35.537287 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 16:34:35.545000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.547000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.602266 kernel: audit: type=1105 audit(1719333275.545:807): pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:35.602412 kernel: audit: type=1103 audit(1719333275.547:808): pid=5129 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:36.163122 sshd[5126]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:36.163000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:36.167338 systemd[1]: sshd@27-10.0.0.149:22-10.0.0.1:46956.service: Deactivated successfully. Jun 25 16:34:36.168449 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:34:36.169235 systemd-logind[1274]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:34:36.172983 systemd-logind[1274]: Removed session 28. 
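Editor's note: the update_engine entries above show its libcurl fetcher failing with "Could not resolve host: disabled" and counting retries, which suggests the update server in this image's update.conf is literally set to "disabled", a common way of switching Flatcar updates off. The Go sketch below imitates that retry loop with a plain net/http client under that assumption; it is not update_engine's (C++) fetcher.

```go
// Minimal retry loop: attempt a fetch, and on failure wait and try again a
// bounded number of times, counting attempts the way the log counts
// "No HTTP response, retry 2". The URL is deliberately unresolvable.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetries(url string, attempts int, delay time.Duration) (*http.Response, error) {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		fmt.Printf("unable to get http response (%v), retry %d\n", err, i+1)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// "disabled" is not a resolvable host, so every attempt fails quickly,
	// mirroring the "Could not resolve host: disabled" errors above.
	if _, err := fetchWithRetries("http://disabled/update", 3, time.Second); err != nil {
		fmt.Println(err)
	}
}
```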
Jun 25 16:34:36.164000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:36.178283 kernel: audit: type=1106 audit(1719333276.163:809): pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:36.178352 kernel: audit: type=1104 audit(1719333276.164:810): pid=5126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:36.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:46956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:36.523000 audit[5146]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:36.523000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7fffdd17dca0 a2=0 a3=7fffdd17dc8c items=0 ppid=2478 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:36.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:36.524000 audit[5146]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:36.524000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffdd17dca0 a2=0 a3=0 items=0 ppid=2478 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:36.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:36.594242 kubelet[2286]: I0625 16:34:36.590647 2286 topology_manager.go:215] "Topology Admit Handler" podUID="205a8ef3-eba7-4d98-beb6-5375b06fe275" podNamespace="calico-apiserver" podName="calico-apiserver-748964d7b4-blslg" Jun 25 16:34:36.609260 systemd[1]: Created slice kubepods-besteffort-pod205a8ef3_eba7_4d98_beb6_5375b06fe275.slice - libcontainer container kubepods-besteffort-pod205a8ef3_eba7_4d98_beb6_5375b06fe275.slice. 
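Editor's note: just above, kubelet admits the calico-apiserver-748964d7b4-blslg pod and systemd creates kubepods-besteffort-pod205a8ef3_eba7_4d98_beb6_5375b06fe275.slice for it. With the systemd cgroup driver, the slice name is built from the pod's QoS class and UID, with the dashes in the UID replaced by underscores. The helper below is an illustrative reconstruction of that naming rule, not kubelet source; feeding in the UID logged above reproduces the slice unit systemd reports creating.

```go
// Rebuild the systemd pod slice name from QoS class and pod UID, as seen in
// the "Created slice kubepods-besteffort-pod...slice" entry above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		strings.ToLower(qosClass), strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("Besteffort", "205a8ef3-eba7-4d98-beb6-5375b06fe275"))
	// Prints: kubepods-besteffort-pod205a8ef3_eba7_4d98_beb6_5375b06fe275.slice
}
```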
Jun 25 16:34:36.619000 audit[5148]: NETFILTER_CFG table=filter:119 family=2 entries=34 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:36.619000 audit[5148]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffca2b36f10 a2=0 a3=7ffca2b36efc items=0 ppid=2478 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:36.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:36.624000 audit[5148]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:36.624000 audit[5148]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca2b36f10 a2=0 a3=0 items=0 ppid=2478 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:36.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:36.707784 kubelet[2286]: I0625 16:34:36.707688 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/205a8ef3-eba7-4d98-beb6-5375b06fe275-calico-apiserver-certs\") pod \"calico-apiserver-748964d7b4-blslg\" (UID: \"205a8ef3-eba7-4d98-beb6-5375b06fe275\") " pod="calico-apiserver/calico-apiserver-748964d7b4-blslg" Jun 25 16:34:36.707784 kubelet[2286]: I0625 16:34:36.707778 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbvf\" (UniqueName: \"kubernetes.io/projected/205a8ef3-eba7-4d98-beb6-5375b06fe275-kube-api-access-ntbvf\") pod \"calico-apiserver-748964d7b4-blslg\" (UID: \"205a8ef3-eba7-4d98-beb6-5375b06fe275\") " pod="calico-apiserver/calico-apiserver-748964d7b4-blslg" Jun 25 16:34:36.810570 kubelet[2286]: E0625 16:34:36.809977 2286 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:34:36.816534 kubelet[2286]: E0625 16:34:36.816476 2286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/205a8ef3-eba7-4d98-beb6-5375b06fe275-calico-apiserver-certs podName:205a8ef3-eba7-4d98-beb6-5375b06fe275 nodeName:}" failed. No retries permitted until 2024-06-25 16:34:37.310775374 +0000 UTC m=+131.311054876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/205a8ef3-eba7-4d98-beb6-5375b06fe275-calico-apiserver-certs") pod "calico-apiserver-748964d7b4-blslg" (UID: "205a8ef3-eba7-4d98-beb6-5375b06fe275") : secret "calico-apiserver-certs" not found Jun 25 16:34:37.540094 containerd[1288]: time="2024-06-25T16:34:37.540036594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748964d7b4-blslg,Uid:205a8ef3-eba7-4d98-beb6-5375b06fe275,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:34:38.232949 systemd-networkd[1111]: caliaf4260c0a20: Link UP Jun 25 16:34:38.268523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:34:38.268670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaf4260c0a20: link becomes ready Jun 25 16:34:38.269031 systemd-networkd[1111]: caliaf4260c0a20: Gained carrier Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.020 [INFO][5151] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0 calico-apiserver-748964d7b4- calico-apiserver 205a8ef3-eba7-4d98-beb6-5375b06fe275 1266 0 2024-06-25 16:34:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748964d7b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-748964d7b4-blslg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf4260c0a20 [] []}} ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.021 [INFO][5151] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.056 [INFO][5165] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" HandleID="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Workload="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.071 [INFO][5165] ipam_plugin.go 264: Auto assigning IP ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" HandleID="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Workload="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003662d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-748964d7b4-blslg", "timestamp":"2024-06-25 16:34:38.056448327 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.071 [INFO][5165] ipam_plugin.go 352: About to acquire 
host-wide IPAM lock. Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.071 [INFO][5165] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.071 [INFO][5165] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.077 [INFO][5165] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.085 [INFO][5165] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.102 [INFO][5165] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.109 [INFO][5165] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.123 [INFO][5165] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.123 [INFO][5165] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.142 [INFO][5165] ipam.go 1685: Creating new handle: k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.155 [INFO][5165] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.221 [INFO][5165] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.221 [INFO][5165] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" host="localhost" Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.221 [INFO][5165] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:34:38.301663 containerd[1288]: 2024-06-25 16:34:38.221 [INFO][5165] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" HandleID="k8s-pod-network.6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Workload="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.229 [INFO][5151] k8s.go 386: Populated endpoint ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0", GenerateName:"calico-apiserver-748964d7b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"205a8ef3-eba7-4d98-beb6-5375b06fe275", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748964d7b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-748964d7b4-blslg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf4260c0a20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.229 [INFO][5151] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.229 [INFO][5151] dataplane_linux.go 68: Setting the host side veth name to caliaf4260c0a20 ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.277 [INFO][5151] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.278 [INFO][5151] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" 
Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0", GenerateName:"calico-apiserver-748964d7b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"205a8ef3-eba7-4d98-beb6-5375b06fe275", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748964d7b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e", Pod:"calico-apiserver-748964d7b4-blslg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf4260c0a20", MAC:"d6:16:75:be:71:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:38.302481 containerd[1288]: 2024-06-25 16:34:38.293 [INFO][5151] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e" Namespace="calico-apiserver" Pod="calico-apiserver-748964d7b4-blslg" WorkloadEndpoint="localhost-k8s-calico--apiserver--748964d7b4--blslg-eth0" Jun 25 16:34:38.322000 audit[5187]: NETFILTER_CFG table=filter:121 family=2 entries=61 op=nft_register_chain pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:38.322000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=30316 a0=3 a1=7ffe2ba00520 a2=0 a3=7ffe2ba0050c items=0 ppid=3654 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:38.322000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:38.383594 containerd[1288]: time="2024-06-25T16:34:38.383392836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:38.383594 containerd[1288]: time="2024-06-25T16:34:38.383574418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:38.384231 containerd[1288]: time="2024-06-25T16:34:38.384193575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:38.384730 containerd[1288]: time="2024-06-25T16:34:38.384637341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:38.412350 systemd[1]: run-containerd-runc-k8s.io-6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e-runc.z9QaNi.mount: Deactivated successfully. Jun 25 16:34:38.424165 systemd[1]: Started cri-containerd-6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e.scope - libcontainer container 6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e. Jun 25 16:34:38.443000 audit: BPF prog-id=177 op=LOAD Jun 25 16:34:38.443000 audit: BPF prog-id=178 op=LOAD Jun 25 16:34:38.443000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5196 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:38.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626664303366333234363665333065616434373764653531326138 Jun 25 16:34:38.443000 audit: BPF prog-id=179 op=LOAD Jun 25 16:34:38.443000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5196 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:38.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626664303366333234363665333065616434373764653531326138 Jun 25 16:34:38.444000 audit: BPF prog-id=179 op=UNLOAD Jun 25 16:34:38.444000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:34:38.444000 audit: BPF prog-id=180 op=LOAD Jun 25 16:34:38.444000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5196 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:38.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626664303366333234363665333065616434373764653531326138 Jun 25 16:34:38.446735 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:38.504951 containerd[1288]: time="2024-06-25T16:34:38.504412739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748964d7b4-blslg,Uid:205a8ef3-eba7-4d98-beb6-5375b06fe275,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e\"" Jun 25 16:34:38.508197 containerd[1288]: time="2024-06-25T16:34:38.506071845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:34:40.257871 systemd-networkd[1111]: caliaf4260c0a20: Gained IPv6LL Jun 25 16:34:41.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:56618 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:41.192415 systemd[1]: Started sshd@28-10.0.0.149:22-10.0.0.1:56618.service - OpenSSH per-connection server daemon (10.0.0.1:56618). Jun 25 16:34:41.193535 kernel: kauditd_printk_skb: 28 callbacks suppressed Jun 25 16:34:41.193582 kernel: audit: type=1130 audit(1719333281.191:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:56618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:41.300000 audit[5238]: USER_ACCT pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.303209 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 56618 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:41.303167 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:41.301000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.313180 kernel: audit: type=1101 audit(1719333281.300:824): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.313401 kernel: audit: type=1103 audit(1719333281.301:825): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.313432 kernel: audit: type=1006 audit(1719333281.301:826): pid=5238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 16:34:41.317380 kernel: audit: type=1300 audit(1719333281.301:826): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc8a0b520 a2=3 a3=7fb288a41480 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.301000 audit[5238]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc8a0b520 a2=3 a3=7fb288a41480 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.322463 kernel: audit: type=1327 audit(1719333281.301:826): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:41.301000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:41.326561 systemd-logind[1274]: New session 29 of user core. Jun 25 16:34:41.336542 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jun 25 16:34:41.344000 audit[5238]: USER_START pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.351897 kernel: audit: type=1105 audit(1719333281.344:827): pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.351000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.357420 kernel: audit: type=1103 audit(1719333281.351:828): pid=5240 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.598879 sshd[5238]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:41.599000 audit[5238]: USER_END pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.602953 systemd[1]: sshd@28-10.0.0.149:22-10.0.0.1:56618.service: Deactivated successfully. Jun 25 16:34:41.604009 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 16:34:41.604823 systemd-logind[1274]: Session 29 logged out. Waiting for processes to exit. Jun 25 16:34:41.606027 systemd-logind[1274]: Removed session 29. Jun 25 16:34:41.599000 audit[5238]: CRED_DISP pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.617665 kernel: audit: type=1106 audit(1719333281.599:829): pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.617809 kernel: audit: type=1104 audit(1719333281.599:830): pid=5238 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:41.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:56618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:44.172932 update_engine[1280]: I0625 16:34:44.172838 1280 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 16:34:44.173433 update_engine[1280]: I0625 16:34:44.173151 1280 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 16:34:44.173433 update_engine[1280]: I0625 16:34:44.173357 1280 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 16:34:44.182367 update_engine[1280]: E0625 16:34:44.182301 1280 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 16:34:44.182575 update_engine[1280]: I0625 16:34:44.182416 1280 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 25 16:34:45.743470 containerd[1288]: time="2024-06-25T16:34:45.742346436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:45.764633 containerd[1288]: time="2024-06-25T16:34:45.764538256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:34:45.792000 audit[5260]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:45.792000 audit[5260]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe2c8ec250 a2=0 a3=7ffe2c8ec23c items=0 ppid=2478 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:45.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:45.795000 audit[5260]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:45.795000 audit[5260]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe2c8ec250 a2=0 a3=7ffe2c8ec23c items=0 ppid=2478 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:45.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:45.809481 containerd[1288]: time="2024-06-25T16:34:45.809431397Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:45.832406 containerd[1288]: time="2024-06-25T16:34:45.832323227Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:45.838002 containerd[1288]: time="2024-06-25T16:34:45.837939415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:45.839153 containerd[1288]: time="2024-06-25T16:34:45.839118406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 7.333014261s" Jun 25 16:34:45.839295 containerd[1288]: time="2024-06-25T16:34:45.839275472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:34:45.849953 containerd[1288]: time="2024-06-25T16:34:45.849889264Z" level=info msg="CreateContainer within sandbox \"6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:34:46.112451 containerd[1288]: time="2024-06-25T16:34:46.101904814Z" level=info msg="CreateContainer within sandbox \"6bbfd03f32466e30ead477de512a8644f7b75d8e237c41f22aefd836607d606e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8\"" Jun 25 16:34:46.112451 containerd[1288]: time="2024-06-25T16:34:46.107155143Z" level=info msg="StartContainer for \"78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8\"" Jun 25 16:34:46.284944 systemd[1]: run-containerd-runc-k8s.io-78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8-runc.BTZjhI.mount: Deactivated successfully. Jun 25 16:34:46.307959 systemd[1]: Started cri-containerd-78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8.scope - libcontainer container 78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8. Jun 25 16:34:46.338053 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:34:46.338241 kernel: audit: type=1334 audit(1719333286.335:834): prog-id=181 op=LOAD Jun 25 16:34:46.335000 audit: BPF prog-id=181 op=LOAD Jun 25 16:34:46.347000 audit: BPF prog-id=182 op=LOAD Jun 25 16:34:46.357663 kernel: audit: type=1334 audit(1719333286.347:835): prog-id=182 op=LOAD Jun 25 16:34:46.357771 kernel: audit: type=1300 audit(1719333286.347:835): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=5196 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.347000 audit[5274]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=5196 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.366156 kernel: audit: type=1327 audit(1719333286.347:835): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616661373235333431386165333234373531363630373632623866 Jun 25 16:34:46.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616661373235333431386165333234373531363630373632623866 Jun 25 16:34:46.376768 kernel: audit: type=1334 audit(1719333286.348:836): prog-id=183 op=LOAD Jun 25 16:34:46.376926 kernel: audit: type=1300 audit(1719333286.348:836): arch=c000003e syscall=321 success=yes exit=18 a0=5 
a1=c0001b1720 a2=78 a3=0 items=0 ppid=5196 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.348000 audit: BPF prog-id=183 op=LOAD Jun 25 16:34:46.348000 audit[5274]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=5196 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616661373235333431386165333234373531363630373632623866 Jun 25 16:34:46.407094 kernel: audit: type=1327 audit(1719333286.348:836): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616661373235333431386165333234373531363630373632623866 Jun 25 16:34:46.407252 kernel: audit: type=1334 audit(1719333286.348:837): prog-id=183 op=UNLOAD Jun 25 16:34:46.407281 kernel: audit: type=1334 audit(1719333286.348:838): prog-id=182 op=UNLOAD Jun 25 16:34:46.348000 audit: BPF prog-id=183 op=UNLOAD Jun 25 16:34:46.348000 audit: BPF prog-id=182 op=UNLOAD Jun 25 16:34:46.408820 kernel: audit: type=1334 audit(1719333286.351:839): prog-id=184 op=LOAD Jun 25 16:34:46.351000 audit: BPF prog-id=184 op=LOAD Jun 25 16:34:46.351000 audit[5274]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=5196 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616661373235333431386165333234373531363630373632623866 Jun 25 16:34:46.529038 containerd[1288]: time="2024-06-25T16:34:46.527067890Z" level=info msg="StartContainer for \"78afa7253418ae324751660762b8fa27f036a6de86f1d6f9723e88c7127c3dc8\" returns successfully" Jun 25 16:34:46.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.149:22-10.0.0.1:36320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:46.622506 systemd[1]: Started sshd@29-10.0.0.149:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320). 
Jun 25 16:34:46.684000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:46.688010 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:46.686000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:46.686000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff680ac9f0 a2=3 a3=7feed1fda480 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.686000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:46.688425 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:46.698479 systemd-logind[1274]: New session 30 of user core. Jun 25 16:34:46.707261 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 25 16:34:46.726000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:46.741000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:46.881000 audit[5317]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:46.881000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff3bf86870 a2=0 a3=7fff3bf8685c items=0 ppid=2478 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:46.887000 audit[5317]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:46.887000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7fff3bf86870 a2=0 a3=7fff3bf8685c items=0 ppid=2478 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:46.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:47.026237 sshd[5305]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:47.031000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=30 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:47.032000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:47.034878 systemd[1]: sshd@29-10.0.0.149:22-10.0.0.1:36320.service: Deactivated successfully. Jun 25 16:34:47.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.149:22-10.0.0.1:36320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:47.041035 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 16:34:47.042103 systemd-logind[1274]: Session 30 logged out. Waiting for processes to exit. Jun 25 16:34:47.043191 systemd-logind[1274]: Removed session 30. Jun 25 16:34:47.774619 kubelet[2286]: I0625 16:34:47.774561 2286 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748964d7b4-blslg" podStartSLOduration=4.440444128 podCreationTimestamp="2024-06-25 16:34:36 +0000 UTC" firstStartedPulling="2024-06-25 16:34:38.505609453 +0000 UTC m=+132.505888945" lastFinishedPulling="2024-06-25 16:34:45.839669655 +0000 UTC m=+139.839949147" observedRunningTime="2024-06-25 16:34:46.741249989 +0000 UTC m=+140.741529481" watchObservedRunningTime="2024-06-25 16:34:47.77450433 +0000 UTC m=+141.774783823" Jun 25 16:34:47.927000 audit[5321]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:47.927000 audit[5321]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffecbe5f400 a2=0 a3=7ffecbe5f3ec items=0 ppid=2478 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:47.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:47.928000 audit[5321]: NETFILTER_CFG table=nat:127 family=2 entries=51 op=nft_register_chain pid=5321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:47.928000 audit[5321]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffecbe5f400 a2=0 a3=7ffecbe5f3ec items=0 ppid=2478 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:47.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.558310 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:34:51.558514 kernel: audit: type=1325 audit(1719333291.542:853): table=filter:128 family=2 entries=8 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.558555 kernel: audit: type=1300 audit(1719333291.542:853): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd08611a60 a2=0 a3=7ffd08611a4c items=0 
ppid=2478 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.542000 audit[5343]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.542000 audit[5343]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd08611a60 a2=0 a3=7ffd08611a4c items=0 ppid=2478 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.561767 kernel: audit: type=1327 audit(1719333291.542:853): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.550000 audit[5343]: NETFILTER_CFG table=nat:129 family=2 entries=54 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.550000 audit[5343]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffd08611a60 a2=0 a3=7ffd08611a4c items=0 ppid=2478 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.604972 kernel: audit: type=1325 audit(1719333291.550:854): table=nat:129 family=2 entries=54 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.605168 kernel: audit: type=1300 audit(1719333291.550:854): arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffd08611a60 a2=0 a3=7ffd08611a4c items=0 ppid=2478 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.605207 kernel: audit: type=1327 audit(1719333291.550:854): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.886000 audit[5345]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.886000 audit[5345]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff96978bd0 a2=0 a3=7fff96978bbc items=0 ppid=2478 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.903068 kernel: audit: type=1325 audit(1719333291.886:855): table=filter:130 family=2 entries=8 op=nft_register_rule pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.903306 kernel: audit: type=1300 audit(1719333291.886:855): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff96978bd0 a2=0 a3=7fff96978bbc items=0 ppid=2478 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.903395 kernel: audit: type=1327 audit(1719333291.886:855): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:51.908375 kernel: audit: type=1325 audit(1719333291.889:856): table=nat:131 family=2 entries=58 op=nft_register_chain pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.889000 audit[5345]: NETFILTER_CFG table=nat:131 family=2 entries=58 op=nft_register_chain pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:51.889000 audit[5345]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7fff96978bd0 a2=0 a3=7fff96978bbc items=0 ppid=2478 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:51.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:52.042946 systemd[1]: Started sshd@30-10.0.0.149:22-10.0.0.1:36330.service - OpenSSH per-connection server daemon (10.0.0.1:36330). Jun 25 16:34:52.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.149:22-10.0.0.1:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:52.139000 audit[5347]: USER_ACCT pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.144000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.144000 audit[5347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff52acf1e0 a2=3 a3=7ff7e3d59480 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:52.144000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:52.147805 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 36330 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:52.146320 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:52.159887 systemd-logind[1274]: New session 31 of user core. Jun 25 16:34:52.169571 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jun 25 16:34:52.192000 audit[5347]: USER_START pid=5347 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.203000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.414003 sshd[5347]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:52.417000 audit[5347]: USER_END pid=5347 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.417000 audit[5347]: CRED_DISP pid=5347 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:52.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.149:22-10.0.0.1:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:52.420114 systemd[1]: sshd@30-10.0.0.149:22-10.0.0.1:36330.service: Deactivated successfully. Jun 25 16:34:52.421112 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 16:34:52.422489 systemd-logind[1274]: Session 31 logged out. Waiting for processes to exit. Jun 25 16:34:52.423478 systemd-logind[1274]: Removed session 31. Jun 25 16:34:54.170901 update_engine[1280]: I0625 16:34:54.170809 1280 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 16:34:54.171374 update_engine[1280]: I0625 16:34:54.171137 1280 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 16:34:54.171374 update_engine[1280]: I0625 16:34:54.171358 1280 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 16:34:54.183726 update_engine[1280]: E0625 16:34:54.182971 1280 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 16:34:54.183726 update_engine[1280]: I0625 16:34:54.183254 1280 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 16:34:54.183726 update_engine[1280]: I0625 16:34:54.183264 1280 omaha_request_action.cc:617] Omaha request response: Jun 25 16:34:54.183726 update_engine[1280]: E0625 16:34:54.183462 1280 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 25 16:34:54.183726 update_engine[1280]: I0625 16:34:54.183537 1280 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 25 16:34:54.183726 update_engine[1280]: I0625 16:34:54.183545 1280 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 16:34:54.183726 update_engine[1280]: I0625 16:34:54.183549 1280 update_attempter.cc:306] Processing Done. Jun 25 16:34:54.183726 update_engine[1280]: E0625 16:34:54.183562 1280 update_attempter.cc:619] Update failed. 
Jun 25 16:34:54.187898 update_engine[1280]: I0625 16:34:54.187832 1280 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 25 16:34:54.187898 update_engine[1280]: I0625 16:34:54.187872 1280 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 25 16:34:54.187898 update_engine[1280]: I0625 16:34:54.187877 1280 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 25 16:34:54.188118 update_engine[1280]: I0625 16:34:54.187970 1280 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 16:34:54.188118 update_engine[1280]: I0625 16:34:54.187997 1280 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 16:34:54.188118 update_engine[1280]: I0625 16:34:54.188001 1280 omaha_request_action.cc:272] Request: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: Jun 25 16:34:54.188118 update_engine[1280]: I0625 16:34:54.188006 1280 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 16:34:54.188412 update_engine[1280]: I0625 16:34:54.188247 1280 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 16:34:54.188474 update_engine[1280]: I0625 16:34:54.188446 1280 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 16:34:54.190923 locksmithd[1296]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 25 16:34:54.198137 update_engine[1280]: E0625 16:34:54.198091 1280 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198450 1280 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198470 1280 omaha_request_action.cc:617] Omaha request response: Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198475 1280 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198479 1280 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198482 1280 update_attempter.cc:306] Processing Done. Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198487 1280 update_attempter.cc:310] Error event sent. Jun 25 16:34:54.201130 update_engine[1280]: I0625 16:34:54.198498 1280 update_check_scheduler.cc:74] Next update check in 48m45s Jun 25 16:34:54.201667 locksmithd[1296]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 25 16:34:57.426948 systemd[1]: Started sshd@31-10.0.0.149:22-10.0.0.1:41268.service - OpenSSH per-connection server daemon (10.0.0.1:41268). Jun 25 16:34:57.432308 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:34:57.432451 kernel: audit: type=1130 audit(1719333297.426:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.149:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:57.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.149:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:57.486000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:57.488897 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 41268 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:34:57.493874 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:57.489000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:57.500306 kernel: audit: type=1101 audit(1719333297.486:867): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:57.500515 kernel: audit: type=1103 audit(1719333297.489:868): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:34:57.500561 kernel: audit: type=1006 audit(1719333297.489:869): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jun 25 16:34:57.489000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7ae6c5d0 a2=3 a3=7fb91792a480 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:57.514121 kernel: audit: type=1300 audit(1719333297.489:869): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7ae6c5d0 a2=3 a3=7fb91792a480 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:57.514254 kernel: audit: type=1327 audit(1719333297.489:869): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:57.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:57.511704 systemd-logind[1274]: New session 32 of user core. Jun 25 16:34:57.528169 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jun 25 16:34:57.544000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.588675 kernel: audit: type=1105 audit(1719333297.544:870): pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.588872 kernel: audit: type=1103 audit(1719333297.546:871): pid=5376 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.546000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.832781 sshd[5374]: pam_unix(sshd:session): session closed for user core
Jun 25 16:34:57.832000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.837455 systemd[1]: sshd@31-10.0.0.149:22-10.0.0.1:41268.service: Deactivated successfully.
Jun 25 16:34:57.838477 systemd[1]: session-32.scope: Deactivated successfully.
Jun 25 16:34:57.840466 systemd-logind[1274]: Session 32 logged out. Waiting for processes to exit.
Jun 25 16:34:57.841968 systemd-logind[1274]: Removed session 32.
Jun 25 16:34:57.853818 kernel: audit: type=1106 audit(1719333297.832:872): pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.853995 kernel: audit: type=1104 audit(1719333297.832:873): pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.832000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:34:57.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.149:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:35:00.117587 systemd[1]: run-containerd-runc-k8s.io-913a3c0d74bcf615c31fe4de1f0e4b60493d82e11f490e5d668fddf448edf1d5-runc.wKCQfF.mount: Deactivated successfully.
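The kernel-echoed audit records above bracket each SSH session: type=1105 (USER_START) at open and type=1106 (USER_END) at close carry the same ses= number and an audit(<epoch-seconds>:<serial>) timestamp, so session 32 here ran from 1719333297.544 to 1719333297.832, roughly 0.29 s. As a rough, hypothetical illustration only (not an auditd tool), the Go sketch below pairs those two record types by session id to report durations.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// Matches the kernel-echoed audit records above, e.g.
//   kernel: audit: type=1105 audit(1719333297.544:870): ... ses=32 ...
// type=1105 is USER_START (session open), type=1106 is USER_END (session close).
var rec = regexp.MustCompile(`type=(1105|1106) audit\(([0-9]+\.[0-9]+):[0-9]+\).*\bses=([0-9]+)`)

func main() {
	opened := map[string]float64{} // session id -> USER_START timestamp (epoch seconds)
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := rec.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, _ := strconv.ParseFloat(m[2], 64)
		switch m[1] {
		case "1105": // session opened
			opened[m[3]] = ts
		case "1106": // session closed: report how long it lasted
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("ses=%s lasted %.3fs\n", m[3], ts-start)
				delete(opened, m[3])
			}
		}
	}
}

Fed this portion of the log on stdin, it should print something like "ses=32 lasted 0.288s" for the records above.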
Jun 25 16:35:02.137199 kubelet[2286]: E0625 16:35:02.137160 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:35:02.852788 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 16:35:02.852945 kernel: audit: type=1130 audit(1719333302.848:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.149:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:35:02.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.149:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:35:02.849658 systemd[1]: Started sshd@32-10.0.0.149:22-10.0.0.1:41278.service - OpenSSH per-connection server daemon (10.0.0.1:41278).
Jun 25 16:35:02.896000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.898016 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA
Jun 25 16:35:02.900909 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:35:02.897000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.907830 kernel: audit: type=1101 audit(1719333302.896:876): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.907961 kernel: audit: type=1103 audit(1719333302.897:877): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.907998 kernel: audit: type=1006 audit(1719333302.897:878): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1
Jun 25 16:35:02.897000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90ac04a0 a2=3 a3=7f938d300480 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:35:02.917806 kernel: audit: type=1300 audit(1719333302.897:878): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90ac04a0 a2=3 a3=7f938d300480 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:35:02.917907 kernel: audit: type=1327 audit(1719333302.897:878): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:35:02.897000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:35:02.921686 systemd-logind[1274]: New session 33 of user core.
Jun 25 16:35:02.928486 systemd[1]: Started session-33.scope - Session 33 of User core.
Jun 25 16:35:02.946000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.952000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.963357 kernel: audit: type=1105 audit(1719333302.946:879): pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:02.963419 kernel: audit: type=1103 audit(1719333302.952:880): pid=5414 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:03.121470 sshd[5412]: pam_unix(sshd:session): session closed for user core
Jun 25 16:35:03.125000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:03.136431 systemd[1]: sshd@32-10.0.0.149:22-10.0.0.1:41278.service: Deactivated successfully.
Jun 25 16:35:03.137259 systemd-logind[1274]: Session 33 logged out. Waiting for processes to exit.
Jun 25 16:35:03.137387 systemd[1]: session-33.scope: Deactivated successfully.
Jun 25 16:35:03.130000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:03.146214 kernel: audit: type=1106 audit(1719333303.125:881): pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:03.146400 kernel: audit: type=1104 audit(1719333303.130:882): pid=5412 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:35:03.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.149:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:35:03.150880 systemd-logind[1274]: Removed session 33.
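The kubelet warning at 16:35:02 ("Nameserver limits exceeded") is emitted when the node's resolv.conf lists more nameservers than the three the resolver (and kubelet) will apply, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are kept and the rest are omitted. As a rough illustration only (a hypothetical helper, not kubelet's own code), the Go sketch below counts nameserver entries in a resolv.conf and flags the excess.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver limit that kubelet's DNS validation also uses

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// A nameserver line looks like: "nameserver 1.1.1.1"
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("nameservers: %v\n", servers)
	}
}

On a host like the one logging above, with four or more nameservers configured, this would report the overflow and show the first three entries that actually take effect.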