Jul 10 00:42:03.145193 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed Jul 9 23:09:45 -00 2025 Jul 10 00:42:03.145225 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 00:42:03.145234 kernel: BIOS-provided physical RAM map: Jul 10 00:42:03.145239 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 10 00:42:03.145245 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 10 00:42:03.145250 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 10 00:42:03.145257 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 10 00:42:03.145262 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 10 00:42:03.145269 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 10 00:42:03.145285 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 10 00:42:03.145291 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:42:03.145296 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 10 00:42:03.145302 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:42:03.145307 kernel: NX (Execute Disable) protection: active Jul 10 00:42:03.145315 kernel: SMBIOS 2.8 present. Jul 10 00:42:03.145323 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 10 00:42:03.145328 kernel: Hypervisor detected: KVM Jul 10 00:42:03.145334 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 00:42:03.145343 kernel: kvm-clock: cpu 0, msr 9319a001, primary cpu clock Jul 10 00:42:03.145349 kernel: kvm-clock: using sched offset of 3554804827 cycles Jul 10 00:42:03.145356 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 00:42:03.145362 kernel: tsc: Detected 2794.748 MHz processor Jul 10 00:42:03.145368 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:42:03.145376 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:42:03.145382 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 10 00:42:03.145389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:42:03.145395 kernel: Using GB pages for direct mapping Jul 10 00:42:03.145401 kernel: ACPI: Early table checksum verification disabled Jul 10 00:42:03.145407 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 10 00:42:03.145413 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145419 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145425 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145433 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 10 00:42:03.145439 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145445 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145451 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145457 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:42:03.145463 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 10 00:42:03.145469 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 10 00:42:03.145475 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 10 00:42:03.145485 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 10 00:42:03.145492 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 10 00:42:03.145498 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 10 00:42:03.145505 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 10 00:42:03.145511 kernel: No NUMA configuration found Jul 10 00:42:03.145518 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 10 00:42:03.145526 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 10 00:42:03.145532 kernel: Zone ranges: Jul 10 00:42:03.145539 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:42:03.145545 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 10 00:42:03.145552 kernel: Normal empty Jul 10 00:42:03.145558 kernel: Movable zone start for each node Jul 10 00:42:03.145565 kernel: Early memory node ranges Jul 10 00:42:03.145571 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 10 00:42:03.145578 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 10 00:42:03.145586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jul 10 00:42:03.145594 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:42:03.145601 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 10 00:42:03.145608 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 10 00:42:03.145614 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 00:42:03.145621 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 00:42:03.145627 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:42:03.145634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 00:42:03.145640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 00:42:03.145647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:42:03.145683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 00:42:03.145691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 00:42:03.145697 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:42:03.145704 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 00:42:03.145714 kernel: TSC deadline timer available Jul 10 00:42:03.145720 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 10 00:42:03.145727 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 00:42:03.145733 kernel: kvm-guest: setup PV sched yield Jul 10 00:42:03.145740 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 10 00:42:03.145749 kernel: Booting paravirtualized kernel on KVM Jul 10 00:42:03.145756 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:42:03.145763 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 10 00:42:03.145769 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Jul 10 00:42:03.145776 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 10 00:42:03.145782 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 00:42:03.145788 kernel: kvm-guest: setup async PF for cpu 0 Jul 10 00:42:03.145795 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Jul 10 00:42:03.145801 kernel: kvm-guest: PV spinlocks enabled Jul 10 00:42:03.145809 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 00:42:03.145816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Jul 10 00:42:03.145822 kernel: Policy zone: DMA32 Jul 10 00:42:03.145830 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 00:42:03.145837 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:42:03.145844 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:42:03.145850 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:42:03.145857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:42:03.145865 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2275K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved) Jul 10 00:42:03.145872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:42:03.145878 kernel: ftrace: allocating 34602 entries in 136 pages Jul 10 00:42:03.145885 kernel: ftrace: allocated 136 pages with 2 groups Jul 10 00:42:03.145891 kernel: rcu: Hierarchical RCU implementation. Jul 10 00:42:03.145898 kernel: rcu: RCU event tracing is enabled. Jul 10 00:42:03.145905 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:42:03.145912 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:42:03.145918 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:42:03.145926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:42:03.145933 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:42:03.145940 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 00:42:03.145946 kernel: random: crng init done Jul 10 00:42:03.145953 kernel: Console: colour VGA+ 80x25 Jul 10 00:42:03.145959 kernel: printk: console [ttyS0] enabled Jul 10 00:42:03.145966 kernel: ACPI: Core revision 20210730 Jul 10 00:42:03.145972 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 00:42:03.145979 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:42:03.145987 kernel: x2apic enabled Jul 10 00:42:03.145994 kernel: Switched APIC routing to physical x2apic. Jul 10 00:42:03.146003 kernel: kvm-guest: setup PV IPIs Jul 10 00:42:03.146010 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:42:03.146017 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 10 00:42:03.146025 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 10 00:42:03.146032 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 00:42:03.146039 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 00:42:03.146045 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 00:42:03.146058 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:42:03.146065 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:42:03.146072 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:42:03.146080 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 00:42:03.146087 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 00:42:03.146094 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:42:03.146101 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 10 00:42:03.146108 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:42:03.146115 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:42:03.146123 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:42:03.146130 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:42:03.146137 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 10 00:42:03.146144 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:42:03.146151 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:42:03.146158 kernel: LSM: Security Framework initializing Jul 10 00:42:03.146164 kernel: SELinux: Initializing. Jul 10 00:42:03.146171 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:42:03.146180 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:42:03.146187 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 00:42:03.146381 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 00:42:03.146390 kernel: ... version: 0 Jul 10 00:42:03.146396 kernel: ... bit width: 48 Jul 10 00:42:03.146403 kernel: ... generic registers: 6 Jul 10 00:42:03.146410 kernel: ... value mask: 0000ffffffffffff Jul 10 00:42:03.146417 kernel: ... max period: 00007fffffffffff Jul 10 00:42:03.146424 kernel: ... fixed-purpose events: 0 Jul 10 00:42:03.146434 kernel: ... event mask: 000000000000003f Jul 10 00:42:03.146441 kernel: signal: max sigframe size: 1776 Jul 10 00:42:03.146447 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:42:03.146454 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:42:03.146461 kernel: x86: Booting SMP configuration: Jul 10 00:42:03.146468 kernel: .... 
node #0, CPUs: #1 Jul 10 00:42:03.146474 kernel: kvm-clock: cpu 1, msr 9319a041, secondary cpu clock Jul 10 00:42:03.146481 kernel: kvm-guest: setup async PF for cpu 1 Jul 10 00:42:03.146488 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Jul 10 00:42:03.146496 kernel: #2 Jul 10 00:42:03.146504 kernel: kvm-clock: cpu 2, msr 9319a081, secondary cpu clock Jul 10 00:42:03.146513 kernel: kvm-guest: setup async PF for cpu 2 Jul 10 00:42:03.146522 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Jul 10 00:42:03.146531 kernel: #3 Jul 10 00:42:03.146546 kernel: kvm-clock: cpu 3, msr 9319a0c1, secondary cpu clock Jul 10 00:42:03.146555 kernel: kvm-guest: setup async PF for cpu 3 Jul 10 00:42:03.146563 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Jul 10 00:42:03.146570 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:42:03.146579 kernel: smpboot: Max logical packages: 1 Jul 10 00:42:03.146586 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 10 00:42:03.146593 kernel: devtmpfs: initialized Jul 10 00:42:03.146599 kernel: x86/mm: Memory block size: 128MB Jul 10 00:42:03.146607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:42:03.146614 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:42:03.146620 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:42:03.146627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:42:03.146634 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:42:03.146642 kernel: audit: type=2000 audit(1752108122.066:1): state=initialized audit_enabled=0 res=1 Jul 10 00:42:03.146649 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:42:03.146670 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:42:03.146676 kernel: cpuidle: using governor menu Jul 10 00:42:03.146683 kernel: ACPI: bus type PCI registered Jul 10 00:42:03.146690 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:42:03.146697 kernel: dca service started, version 1.12.1 Jul 10 00:42:03.146704 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 10 00:42:03.146711 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Jul 10 00:42:03.146720 kernel: PCI: Using configuration type 1 for base access Jul 10 00:42:03.146729 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 10 00:42:03.146738 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:42:03.146748 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:42:03.146757 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:42:03.146764 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:42:03.146772 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:42:03.146781 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:42:03.146790 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:42:03.146802 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:42:03.146811 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:42:03.146820 kernel: ACPI: Interpreter enabled Jul 10 00:42:03.146829 kernel: ACPI: PM: (supports S0 S3 S5) Jul 10 00:42:03.146838 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:42:03.146847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:42:03.146856 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 00:42:03.146863 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:42:03.147006 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:42:03.147090 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 00:42:03.147163 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 00:42:03.147172 kernel: PCI host bridge to bus 0000:00 Jul 10 00:42:03.147254 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:42:03.147334 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 00:42:03.147402 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:42:03.147471 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 10 00:42:03.147536 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 10 00:42:03.147602 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 10 00:42:03.147685 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:42:03.147779 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 10 00:42:03.147866 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 10 00:42:03.147941 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 10 00:42:03.148019 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 10 00:42:03.148091 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 10 00:42:03.148165 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:42:03.148247 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 10 00:42:03.148332 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 10 00:42:03.148411 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 10 00:42:03.148487 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 10 00:42:03.148589 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 10 00:42:03.148701 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 10 00:42:03.148789 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 10 00:42:03.148876 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 10 00:42:03.148978 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 10 00:42:03.149076 
kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 10 00:42:03.149193 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 10 00:42:03.149312 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 10 00:42:03.149409 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 10 00:42:03.149511 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 10 00:42:03.149609 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 00:42:03.149723 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 10 00:42:03.149823 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 10 00:42:03.149923 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 10 00:42:03.150026 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 10 00:42:03.150137 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 10 00:42:03.150148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 00:42:03.150156 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 00:42:03.150175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:42:03.150184 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 00:42:03.150191 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 00:42:03.150201 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 00:42:03.150208 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 00:42:03.150215 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 00:42:03.150233 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 00:42:03.150241 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 00:42:03.150248 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 00:42:03.150255 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 00:42:03.150262 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 00:42:03.150269 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 00:42:03.150298 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 00:42:03.150305 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 00:42:03.150312 kernel: iommu: Default domain type: Translated Jul 10 00:42:03.150319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:42:03.150431 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 00:42:03.150539 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:42:03.150632 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 00:42:03.150665 kernel: vgaarb: loaded Jul 10 00:42:03.150676 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:42:03.150683 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:42:03.150690 kernel: PTP clock support registered Jul 10 00:42:03.150697 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:42:03.150704 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:42:03.150717 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 10 00:42:03.150733 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 10 00:42:03.150744 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 00:42:03.150751 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 00:42:03.150761 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 00:42:03.150768 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:42:03.150775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:42:03.150789 kernel: pnp: PnP ACPI init Jul 10 00:42:03.150917 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 10 00:42:03.150929 kernel: pnp: PnP ACPI: found 6 devices Jul 10 00:42:03.150936 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:42:03.150956 kernel: NET: Registered PF_INET protocol family Jul 10 00:42:03.150963 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:42:03.150973 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:42:03.150980 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:42:03.150987 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:42:03.150994 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 10 00:42:03.151001 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:42:03.151008 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:42:03.151015 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:42:03.151022 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:42:03.151030 kernel: NET: Registered PF_XDP protocol family Jul 10 00:42:03.151105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 00:42:03.151174 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 00:42:03.151240 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 00:42:03.151317 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 10 00:42:03.151386 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 10 00:42:03.151453 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 10 00:42:03.151462 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:42:03.151469 kernel: Initialise system trusted keyrings Jul 10 00:42:03.151478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:42:03.151485 kernel: Key type asymmetric registered Jul 10 00:42:03.151492 kernel: Asymmetric key parser 'x509' registered Jul 10 00:42:03.151499 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:42:03.151506 kernel: io scheduler mq-deadline registered Jul 10 00:42:03.151513 kernel: io scheduler kyber registered Jul 10 00:42:03.151520 kernel: io scheduler bfq registered Jul 10 00:42:03.151526 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:42:03.151534 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 00:42:03.151542 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 
00:42:03.151549 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 00:42:03.151556 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:42:03.151563 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:42:03.151570 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 00:42:03.151577 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:42:03.151584 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:42:03.151591 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:42:03.151682 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 00:42:03.151757 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 00:42:03.151827 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:42:02 UTC (1752108122) Jul 10 00:42:03.151900 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 10 00:42:03.151910 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:42:03.151917 kernel: Segment Routing with IPv6 Jul 10 00:42:03.151924 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:42:03.151930 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:42:03.151937 kernel: Key type dns_resolver registered Jul 10 00:42:03.151946 kernel: IPI shorthand broadcast: enabled Jul 10 00:42:03.151953 kernel: sched_clock: Marking stable (480002540, 101881130)->(598735253, -16851583) Jul 10 00:42:03.151960 kernel: registered taskstats version 1 Jul 10 00:42:03.151967 kernel: Loading compiled-in X.509 certificates Jul 10 00:42:03.151974 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 6ebecdd7757c0df63fc51731f0b99957f4e4af16' Jul 10 00:42:03.151981 kernel: Key type .fscrypt registered Jul 10 00:42:03.151988 kernel: Key type fscrypt-provisioning registered Jul 10 00:42:03.151995 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:42:03.152003 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:42:03.152010 kernel: ima: No architecture policies found Jul 10 00:42:03.152017 kernel: clk: Disabling unused clocks Jul 10 00:42:03.152024 kernel: Freeing unused kernel image (initmem) memory: 47472K Jul 10 00:42:03.152031 kernel: Write protecting the kernel read-only data: 28672k Jul 10 00:42:03.152038 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 10 00:42:03.152045 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Jul 10 00:42:03.152052 kernel: Run /init as init process Jul 10 00:42:03.152059 kernel: with arguments: Jul 10 00:42:03.152067 kernel: /init Jul 10 00:42:03.152074 kernel: with environment: Jul 10 00:42:03.152080 kernel: HOME=/ Jul 10 00:42:03.152087 kernel: TERM=linux Jul 10 00:42:03.152094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:42:03.152106 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:42:03.152115 systemd[1]: Detected virtualization kvm. Jul 10 00:42:03.152123 systemd[1]: Detected architecture x86-64. Jul 10 00:42:03.152132 systemd[1]: Running in initrd. Jul 10 00:42:03.152139 systemd[1]: No hostname configured, using default hostname. Jul 10 00:42:03.152146 systemd[1]: Hostname set to . 
Jul 10 00:42:03.152154 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:42:03.152161 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:42:03.152168 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:42:03.152176 systemd[1]: Reached target cryptsetup.target. Jul 10 00:42:03.152183 systemd[1]: Reached target paths.target. Jul 10 00:42:03.152190 systemd[1]: Reached target slices.target. Jul 10 00:42:03.152200 systemd[1]: Reached target swap.target. Jul 10 00:42:03.152213 systemd[1]: Reached target timers.target. Jul 10 00:42:03.152222 systemd[1]: Listening on iscsid.socket. Jul 10 00:42:03.152230 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:42:03.152238 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:42:03.152247 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:42:03.152254 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:42:03.152262 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:42:03.152270 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:42:03.152285 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:42:03.152293 systemd[1]: Reached target sockets.target. Jul 10 00:42:03.152301 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:42:03.152308 systemd[1]: Finished network-cleanup.service. Jul 10 00:42:03.152317 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:42:03.152326 systemd[1]: Starting systemd-journald.service... Jul 10 00:42:03.152334 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:42:03.152341 systemd[1]: Starting systemd-resolved.service... Jul 10 00:42:03.152349 systemd[1]: Starting systemd-vconsole-setup.service... Jul 10 00:42:03.152357 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:42:03.152366 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:42:03.152373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:42:03.152381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:42:03.152394 systemd-journald[198]: Journal started Jul 10 00:42:03.152433 systemd-journald[198]: Runtime Journal (/run/log/journal/6b8142feb0034aa0a270d5d43bd8c0c8) is 6.0M, max 48.5M, 42.5M free. Jul 10 00:42:03.137715 systemd-modules-load[199]: Inserted module 'overlay' Jul 10 00:42:03.179518 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:42:03.179534 kernel: Bridge firewalling registered Jul 10 00:42:03.158761 systemd-resolved[200]: Positive Trust Anchors: Jul 10 00:42:03.183678 systemd[1]: Started systemd-journald.service. Jul 10 00:42:03.183703 kernel: audit: type=1130 audit(1752108123.179:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.158772 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:42:03.158803 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:42:03.161120 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 10 00:42:03.179509 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 10 00:42:03.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.192731 systemd[1]: Started systemd-resolved.service. Jul 10 00:42:03.196945 kernel: audit: type=1130 audit(1752108123.191:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.196964 kernel: audit: type=1130 audit(1752108123.196:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.197164 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:42:03.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.201496 systemd[1]: Reached target nss-lookup.target. Jul 10 00:42:03.206232 kernel: audit: type=1130 audit(1752108123.200:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.206251 kernel: SCSI subsystem initialized Jul 10 00:42:03.206934 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:42:03.216100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:42:03.216128 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:42:03.217327 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:42:03.220039 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 10 00:42:03.221387 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:42:03.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.223611 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 00:42:03.227709 kernel: audit: type=1130 audit(1752108123.222:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:03.227724 kernel: audit: type=1130 audit(1752108123.226:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.228540 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:42:03.232424 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:42:03.236744 dracut-cmdline[218]: dracut-dracut-053 Jul 10 00:42:03.238822 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 00:42:03.244453 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:42:03.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.248668 kernel: audit: type=1130 audit(1752108123.245:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.300678 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:42:03.322685 kernel: iscsi: registered transport (tcp) Jul 10 00:42:03.343684 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:42:03.343750 kernel: QLogic iSCSI HBA Driver Jul 10 00:42:03.375335 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:42:03.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.376406 systemd[1]: Starting dracut-pre-udev.service... Jul 10 00:42:03.380781 kernel: audit: type=1130 audit(1752108123.374:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.422680 kernel: raid6: avx2x4 gen() 28768 MB/s Jul 10 00:42:03.439680 kernel: raid6: avx2x4 xor() 7212 MB/s Jul 10 00:42:03.456686 kernel: raid6: avx2x2 gen() 31653 MB/s Jul 10 00:42:03.473687 kernel: raid6: avx2x2 xor() 18419 MB/s Jul 10 00:42:03.490687 kernel: raid6: avx2x1 gen() 25127 MB/s Jul 10 00:42:03.507690 kernel: raid6: avx2x1 xor() 15065 MB/s Jul 10 00:42:03.524689 kernel: raid6: sse2x4 gen() 14267 MB/s Jul 10 00:42:03.541687 kernel: raid6: sse2x4 xor() 6970 MB/s Jul 10 00:42:03.558678 kernel: raid6: sse2x2 gen() 16272 MB/s Jul 10 00:42:03.575686 kernel: raid6: sse2x2 xor() 9827 MB/s Jul 10 00:42:03.592681 kernel: raid6: sse2x1 gen() 12537 MB/s Jul 10 00:42:03.610002 kernel: raid6: sse2x1 xor() 7790 MB/s Jul 10 00:42:03.610019 kernel: raid6: using algorithm avx2x2 gen() 31653 MB/s Jul 10 00:42:03.610028 kernel: raid6: .... 
xor() 18419 MB/s, rmw enabled Jul 10 00:42:03.610691 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:42:03.622673 kernel: xor: automatically using best checksumming function avx Jul 10 00:42:03.712685 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 10 00:42:03.721172 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:42:03.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.725000 audit: BPF prog-id=7 op=LOAD Jul 10 00:42:03.725000 audit: BPF prog-id=8 op=LOAD Jul 10 00:42:03.725673 kernel: audit: type=1130 audit(1752108123.721:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.725960 systemd[1]: Starting systemd-udevd.service... Jul 10 00:42:03.777016 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 10 00:42:03.781072 systemd[1]: Started systemd-udevd.service. Jul 10 00:42:03.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.782803 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:42:03.795038 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 10 00:42:03.823747 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:42:03.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.826820 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:42:03.862602 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:42:03.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:03.894671 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:42:03.899752 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:42:03.899765 kernel: GPT:9289727 != 19775487 Jul 10 00:42:03.899778 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:42:03.899786 kernel: GPT:9289727 != 19775487 Jul 10 00:42:03.899794 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:42:03.899803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:42:03.904682 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:42:03.908671 kernel: libata version 3.00 loaded. 
Jul 10 00:42:03.916676 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:42:03.918280 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:42:03.918294 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 10 00:42:03.918387 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:42:03.918467 kernel: scsi host0: ahci Jul 10 00:42:03.918559 kernel: scsi host1: ahci Jul 10 00:42:03.918675 kernel: scsi host2: ahci Jul 10 00:42:03.918763 kernel: scsi host3: ahci Jul 10 00:42:03.918847 kernel: scsi host4: ahci Jul 10 00:42:03.918933 kernel: scsi host5: ahci Jul 10 00:42:03.919017 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 10 00:42:03.919028 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 10 00:42:03.919036 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 10 00:42:03.919045 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 10 00:42:03.919053 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 10 00:42:03.919062 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 10 00:42:03.922670 kernel: AVX2 version of gcm_enc/dec engaged. Jul 10 00:42:03.922697 kernel: AES CTR mode by8 optimization enabled Jul 10 00:42:03.925720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:42:03.966962 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Jul 10 00:42:03.965021 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:42:03.972739 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:42:03.977002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:42:03.980193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:42:03.983601 systemd[1]: Starting disk-uuid.service... Jul 10 00:42:03.995159 disk-uuid[518]: Primary Header is updated. Jul 10 00:42:03.995159 disk-uuid[518]: Secondary Entries is updated. Jul 10 00:42:03.995159 disk-uuid[518]: Secondary Header is updated. Jul 10 00:42:03.999052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:42:04.002687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:42:04.230701 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:42:04.230784 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:42:04.231676 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:42:04.232689 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:42:04.233675 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:42:04.235053 kernel: ata3.00: applying bridge limits Jul 10 00:42:04.235067 kernel: ata3.00: configured for UDMA/100 Jul 10 00:42:04.235676 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:42:04.239679 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:42:04.239706 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:42:04.275678 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:42:04.293528 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:42:04.293540 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:42:05.065078 disk-uuid[519]: The operation has completed successfully. 
Jul 10 00:42:05.066330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:42:05.088814 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:42:05.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.088897 systemd[1]: Finished disk-uuid.service. Jul 10 00:42:05.092930 systemd[1]: Starting verity-setup.service... Jul 10 00:42:05.106686 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 10 00:42:05.130758 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:42:05.132437 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:42:05.134746 systemd[1]: Finished verity-setup.service. Jul 10 00:42:05.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.199677 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:42:05.199915 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:42:05.200275 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:42:05.201532 systemd[1]: Starting ignition-setup.service... Jul 10 00:42:05.202479 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:42:05.218997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:42:05.219040 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:42:05.219064 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:42:05.228409 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:42:05.274330 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:42:05.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.276000 audit: BPF prog-id=9 op=LOAD Jul 10 00:42:05.277467 systemd[1]: Starting systemd-networkd.service... Jul 10 00:42:05.297680 systemd-networkd[706]: lo: Link UP Jul 10 00:42:05.297687 systemd-networkd[706]: lo: Gained carrier Jul 10 00:42:05.298379 systemd-networkd[706]: Enumeration completed Jul 10 00:42:05.298517 systemd[1]: Started systemd-networkd.service. Jul 10 00:42:05.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.298641 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:42:05.340491 systemd[1]: Reached target network.target. Jul 10 00:42:05.340901 systemd-networkd[706]: eth0: Link UP Jul 10 00:42:05.340904 systemd-networkd[706]: eth0: Gained carrier Jul 10 00:42:05.342524 systemd[1]: Starting iscsiuio.service... Jul 10 00:42:05.365511 systemd[1]: Started iscsiuio.service. Jul 10 00:42:05.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:05.367969 systemd[1]: Starting iscsid.service... Jul 10 00:42:05.372309 iscsid[711]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:42:05.372309 iscsid[711]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:42:05.372309 iscsid[711]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:42:05.372309 iscsid[711]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:42:05.372309 iscsid[711]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:42:05.372309 iscsid[711]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:42:05.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.372757 systemd-networkd[706]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:42:05.375106 systemd[1]: Started iscsid.service. Jul 10 00:42:05.380081 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:42:05.389474 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:42:05.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.391085 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:42:05.392646 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:42:05.394297 systemd[1]: Reached target remote-fs.target. Jul 10 00:42:05.396461 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:42:05.405328 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:42:05.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.544560 systemd[1]: Finished ignition-setup.service. Jul 10 00:42:05.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.546956 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 10 00:42:05.617077 ignition[726]: Ignition 2.14.0 Jul 10 00:42:05.617090 ignition[726]: Stage: fetch-offline Jul 10 00:42:05.617183 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:05.617195 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:05.617340 ignition[726]: parsed url from cmdline: "" Jul 10 00:42:05.617345 ignition[726]: no config URL provided Jul 10 00:42:05.617352 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:42:05.617362 ignition[726]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:42:05.617384 ignition[726]: op(1): [started] loading QEMU firmware config module Jul 10 00:42:05.617390 ignition[726]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:42:05.674014 ignition[726]: op(1): [finished] loading QEMU firmware config module Jul 10 00:42:05.674071 ignition[726]: QEMU firmware config was not found. Ignoring... Jul 10 00:42:05.712684 ignition[726]: parsing config with SHA512: 294ff331cde52b389846c3bd46bea7fd61717c36fb31dae18564bfbf748e1cad04c053066ce380eddb984ad61172d62d6ec497c911219102677d5c7b522ecba5 Jul 10 00:42:05.720646 unknown[726]: fetched base config from "system" Jul 10 00:42:05.720670 unknown[726]: fetched user config from "qemu" Jul 10 00:42:05.721191 ignition[726]: fetch-offline: fetch-offline passed Jul 10 00:42:05.721264 ignition[726]: Ignition finished successfully Jul 10 00:42:05.724849 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:42:05.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.725380 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:42:05.726411 systemd[1]: Starting ignition-kargs.service... Jul 10 00:42:05.742706 ignition[734]: Ignition 2.14.0 Jul 10 00:42:05.742722 ignition[734]: Stage: kargs Jul 10 00:42:05.742834 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:05.742845 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:05.744126 ignition[734]: kargs: kargs passed Jul 10 00:42:05.744176 ignition[734]: Ignition finished successfully Jul 10 00:42:05.748020 systemd[1]: Finished ignition-kargs.service. Jul 10 00:42:05.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.749750 systemd[1]: Starting ignition-disks.service... Jul 10 00:42:05.760105 ignition[740]: Ignition 2.14.0 Jul 10 00:42:05.760118 ignition[740]: Stage: disks Jul 10 00:42:05.760257 ignition[740]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:05.760271 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:05.762114 ignition[740]: disks: disks passed Jul 10 00:42:05.762156 ignition[740]: Ignition finished successfully Jul 10 00:42:05.765306 systemd[1]: Finished ignition-disks.service. Jul 10 00:42:05.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.767141 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:42:05.767560 systemd[1]: Reached target local-fs-pre.target. 
Jul 10 00:42:05.770313 systemd[1]: Reached target local-fs.target. Jul 10 00:42:05.770750 systemd[1]: Reached target sysinit.target. Jul 10 00:42:05.772445 systemd[1]: Reached target basic.target. Jul 10 00:42:05.774890 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:42:05.787908 systemd-fsck[748]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 10 00:42:05.794333 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:42:05.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.797334 systemd[1]: Mounting sysroot.mount... Jul 10 00:42:05.805680 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:42:05.806013 systemd[1]: Mounted sysroot.mount. Jul 10 00:42:05.807406 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:42:05.809958 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:42:05.811547 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:42:05.811586 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:42:05.811607 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:42:05.816940 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:42:05.819130 systemd[1]: Starting initrd-setup-root.service... Jul 10 00:42:05.824201 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:42:05.828589 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:42:05.834079 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:42:05.838424 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:42:05.863137 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:42:05.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.864185 systemd[1]: Starting ignition-mount.service... Jul 10 00:42:05.866129 systemd[1]: Starting sysroot-boot.service... Jul 10 00:42:05.871389 bash[799]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:42:05.883462 systemd[1]: Finished sysroot-boot.service. Jul 10 00:42:05.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:05.887435 ignition[801]: INFO : Ignition 2.14.0 Jul 10 00:42:05.887435 ignition[801]: INFO : Stage: mount Jul 10 00:42:05.888954 ignition[801]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:05.888954 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:05.891714 ignition[801]: INFO : mount: mount passed Jul 10 00:42:05.892468 ignition[801]: INFO : Ignition finished successfully Jul 10 00:42:05.893788 systemd[1]: Finished ignition-mount.service. Jul 10 00:42:05.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:06.142879 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 10 00:42:06.153815 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809) Jul 10 00:42:06.153858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:42:06.153868 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:42:06.154729 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:42:06.159082 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:42:06.160875 systemd[1]: Starting ignition-files.service... Jul 10 00:42:06.180829 ignition[829]: INFO : Ignition 2.14.0 Jul 10 00:42:06.180829 ignition[829]: INFO : Stage: files Jul 10 00:42:06.182703 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:06.182703 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:06.182703 ignition[829]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:42:06.186195 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:42:06.186195 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:42:06.189111 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:42:06.190527 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:42:06.192348 unknown[829]: wrote ssh authorized keys file for user: core Jul 10 00:42:06.193424 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:42:06.195068 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:42:06.196742 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:42:06.198381 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:42:06.200191 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:42:06.235628 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:42:06.425307 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:42:06.425307 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:42:06.429099 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:42:06.429099 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:42:06.432520 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:42:06.434199 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:42:06.436472 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:42:06.438417 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 00:42:06.768049 systemd-networkd[706]: eth0: Gained IPv6LL Jul 10 00:42:06.993126 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:42:07.630474 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:42:07.630474 ignition[829]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 10 
00:42:07.634614 ignition[829]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:42:07.634614 ignition[829]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:42:07.669811 ignition[829]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:42:07.672330 ignition[829]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:42:07.672330 ignition[829]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:42:07.672330 ignition[829]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:42:07.672330 ignition[829]: INFO : files: files passed Jul 10 00:42:07.672330 ignition[829]: INFO : Ignition finished successfully Jul 10 00:42:07.728131 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 10 00:42:07.728156 kernel: audit: type=1130 audit(1752108127.672:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.728193 kernel: audit: type=1130 audit(1752108127.716:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.728207 kernel: audit: type=1130 audit(1752108127.720:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.728219 kernel: audit: type=1131 audit(1752108127.720:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.671186 systemd[1]: Finished ignition-files.service. Jul 10 00:42:07.673211 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:42:07.678158 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Jul 10 00:42:07.732932 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:42:07.679046 systemd[1]: Starting ignition-quench.service... Jul 10 00:42:07.735358 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:42:07.680624 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:42:07.716546 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:42:07.716616 systemd[1]: Finished ignition-quench.service. Jul 10 00:42:07.720919 systemd[1]: Reached target ignition-complete.target. Jul 10 00:42:07.728979 systemd[1]: Starting initrd-parse-etc.service... Jul 10 00:42:07.741793 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:42:07.741905 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:42:07.750760 kernel: audit: type=1130 audit(1752108127.742:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.750782 kernel: audit: type=1131 audit(1752108127.742:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.743766 systemd[1]: Reached target initrd-fs.target. Jul 10 00:42:07.750776 systemd[1]: Reached target initrd.target. Jul 10 00:42:07.751591 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:42:07.752601 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:42:07.763979 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 00:42:07.768960 kernel: audit: type=1130 audit(1752108127.763:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.764943 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:42:07.772799 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:42:07.773817 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:42:07.775636 systemd[1]: Stopped target timers.target. Jul 10 00:42:07.777498 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:42:07.783911 kernel: audit: type=1131 audit(1752108127.778:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.777606 systemd[1]: Stopped dracut-pre-pivot.service. 
Jul 10 00:42:07.819235 kernel: audit: type=1131 audit(1752108127.785:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.819302 kernel: audit: type=1131 audit(1752108127.789:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.820013 iscsid[711]: iscsid shutting down. Jul 10 00:42:07.779399 systemd[1]: Stopped target initrd.target. Jul 10 00:42:07.783967 systemd[1]: Stopped target basic.target. Jul 10 00:42:07.784276 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:42:07.784451 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:42:07.784642 systemd[1]: Stopped target initrd-root-device.target. 
Jul 10 00:42:07.785037 systemd[1]: Stopped target remote-fs.target. Jul 10 00:42:07.785225 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:42:07.785395 systemd[1]: Stopped target sysinit.target. Jul 10 00:42:07.785570 systemd[1]: Stopped target local-fs.target. Jul 10 00:42:07.785945 systemd[1]: Stopped target local-fs-pre.target. Jul 10 00:42:07.786103 systemd[1]: Stopped target swap.target. Jul 10 00:42:07.786259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:42:07.786344 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:42:07.786538 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:42:07.789695 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:42:07.789772 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:42:07.789903 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:42:07.789982 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:42:07.793414 systemd[1]: Stopped target paths.target. Jul 10 00:42:07.793518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:42:07.796783 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:42:07.797287 systemd[1]: Stopped target slices.target. Jul 10 00:42:07.797590 systemd[1]: Stopped target sockets.target. Jul 10 00:42:07.798060 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:42:07.798232 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:42:07.798511 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:42:07.798603 systemd[1]: Stopped ignition-files.service. Jul 10 00:42:07.800011 systemd[1]: Stopping ignition-mount.service... Jul 10 00:42:07.800531 systemd[1]: Stopping iscsid.service... Jul 10 00:42:07.800618 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:42:07.800722 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:42:07.801955 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:42:07.802276 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:42:07.802442 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:42:07.803225 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:42:07.803352 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:42:07.807229 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:42:07.807323 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:42:07.808341 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:42:07.808421 systemd[1]: Stopped iscsid.service. Jul 10 00:42:07.809142 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:42:07.809192 systemd[1]: Closed iscsid.socket. Jul 10 00:42:07.812662 systemd[1]: Stopping iscsiuio.service... Jul 10 00:42:07.813214 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:42:07.813371 systemd[1]: Stopped iscsiuio.service. Jul 10 00:42:07.814132 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:42:07.814230 systemd[1]: Closed iscsiuio.socket. Jul 10 00:42:07.831415 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 10 00:42:07.933434 ignition[869]: INFO : Ignition 2.14.0 Jul 10 00:42:07.933434 ignition[869]: INFO : Stage: umount Jul 10 00:42:07.935791 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:42:07.935791 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:42:07.935791 ignition[869]: INFO : umount: umount passed Jul 10 00:42:07.935791 ignition[869]: INFO : Ignition finished successfully Jul 10 00:42:07.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.935849 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:42:07.935982 systemd[1]: Stopped ignition-mount.service. Jul 10 00:42:07.938301 systemd[1]: Stopped target network.target. Jul 10 00:42:07.940235 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:42:07.940288 systemd[1]: Stopped ignition-disks.service. Jul 10 00:42:07.942358 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:42:07.942443 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:42:07.944193 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:42:07.944243 systemd[1]: Stopped ignition-setup.service. Jul 10 00:42:07.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.945369 systemd[1]: Stopping systemd-networkd.service... Jul 10 00:42:07.947247 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:42:07.952711 systemd-networkd[706]: eth0: DHCPv6 lease lost Jul 10 00:42:07.959000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:42:07.954533 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:42:07.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.954644 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:42:07.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.957783 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:42:07.957820 systemd[1]: Closed systemd-networkd.socket. 
Jul 10 00:42:07.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.960452 systemd[1]: Stopping network-cleanup.service... Jul 10 00:42:07.962018 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:42:07.962071 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:42:07.963147 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:42:07.963200 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:42:07.976000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:42:07.965133 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:42:07.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.965187 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:42:07.966427 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:42:07.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.970129 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:42:07.970714 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:42:07.970831 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:42:07.977749 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:42:07.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.977855 systemd[1]: Stopped network-cleanup.service. Jul 10 00:42:07.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.980176 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:42:07.980313 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:42:07.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.983224 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:42:07.983268 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:42:07.985709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:42:07.985747 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:42:07.987945 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:42:08.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:08.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.987991 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:42:07.989910 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:42:07.989944 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:42:07.991960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:42:07.992003 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:42:07.994770 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:42:07.995770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:42:08.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:07.995832 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:42:08.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:08.001022 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:42:08.001092 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:42:08.008869 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:42:08.008989 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:42:08.010420 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:42:08.012207 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:42:08.012298 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:42:08.020000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:42:08.020000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:42:08.020000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:42:08.014274 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:42:08.020209 systemd[1]: Switching root. Jul 10 00:42:08.024000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:42:08.024000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:42:08.034647 systemd-journald[198]: Journal stopped Jul 10 00:42:12.894460 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 10 00:42:12.894511 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:42:12.894523 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:42:12.894537 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:42:12.894551 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:42:12.894565 kernel: SELinux: policy capability open_perms=1 Jul 10 00:42:12.894578 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:42:12.894596 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:42:12.894607 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:42:12.894616 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:42:12.894626 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:42:12.894638 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:42:12.894650 systemd[1]: Successfully loaded SELinux policy in 43.804ms. Jul 10 00:42:12.894683 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.155ms. 
Jul 10 00:42:12.894702 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:42:12.894714 systemd[1]: Detected virtualization kvm. Jul 10 00:42:12.894724 systemd[1]: Detected architecture x86-64. Jul 10 00:42:12.894736 systemd[1]: Detected first boot. Jul 10 00:42:12.894749 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:42:12.894760 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:42:12.894770 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:42:12.894780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:42:12.894791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:42:12.894802 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:42:12.894815 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:42:12.894825 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:42:12.894841 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:42:12.894855 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:42:12.894869 systemd[1]: Created slice system-getty.slice. Jul 10 00:42:12.894883 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:42:12.894897 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:42:12.894909 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:42:12.894920 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:42:12.894930 systemd[1]: Created slice user.slice. Jul 10 00:42:12.894944 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:42:12.894956 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:42:12.894966 systemd[1]: Set up automount boot.automount. Jul 10 00:42:12.894978 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:42:12.894992 systemd[1]: Reached target integritysetup.target. Jul 10 00:42:12.895004 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:42:12.895017 systemd[1]: Reached target remote-fs.target. Jul 10 00:42:12.895027 systemd[1]: Reached target slices.target. Jul 10 00:42:12.895048 systemd[1]: Reached target swap.target. Jul 10 00:42:12.895060 systemd[1]: Reached target torcx.target. Jul 10 00:42:12.895070 systemd[1]: Reached target veritysetup.target. Jul 10 00:42:12.895083 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:42:12.895093 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:42:12.895103 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 10 00:42:12.895112 systemd[1]: Listening on systemd-journald-audit.socket. 
Jul 10 00:42:12.895123 kernel: audit: type=1400 audit(1752108132.727:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:42:12.895133 kernel: audit: type=1335 audit(1752108132.727:85): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 00:42:12.895145 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:42:12.895161 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:42:12.895175 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:42:12.895189 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:42:12.895204 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:42:12.895217 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:42:12.895227 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:42:12.895237 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:42:12.895248 systemd[1]: Mounting media.mount... Jul 10 00:42:12.895258 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:42:12.895272 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:42:12.895286 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:42:12.895298 systemd[1]: Mounting tmp.mount... Jul 10 00:42:12.895308 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:42:12.895320 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:42:12.895336 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:42:12.895350 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:42:12.895361 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:42:12.895373 systemd[1]: Starting modprobe@drm.service... Jul 10 00:42:12.895390 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:42:12.895402 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:42:12.895415 systemd[1]: Starting modprobe@loop.service... Jul 10 00:42:12.895429 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:42:12.895444 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 10 00:42:12.895459 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 10 00:42:12.895473 systemd[1]: Starting systemd-journald.service... Jul 10 00:42:12.895486 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:42:12.895500 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:42:12.895515 kernel: fuse: init (API version 7.34) Jul 10 00:42:12.895529 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:42:12.895542 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:42:12.895556 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:42:12.895568 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:42:12.895578 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:42:12.895588 kernel: loop: module loaded Jul 10 00:42:12.895597 systemd[1]: Mounted media.mount. Jul 10 00:42:12.895609 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:42:12.895621 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 10 00:42:12.895631 systemd[1]: Mounted tmp.mount. Jul 10 00:42:12.895641 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:42:12.895664 kernel: audit: type=1130 audit(1752108132.888:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.895675 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:42:12.895686 kernel: audit: type=1305 audit(1752108132.892:87): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:42:12.895696 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:42:12.895709 systemd-journald[1011]: Journal started Jul 10 00:42:12.895751 systemd-journald[1011]: Runtime Journal (/run/log/journal/6b8142feb0034aa0a270d5d43bd8c0c8) is 6.0M, max 48.5M, 42.5M free. Jul 10 00:42:12.895780 kernel: audit: type=1300 audit(1752108132.892:87): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd3273f670 a2=4000 a3=7ffd3273f70c items=0 ppid=1 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:12.727000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:42:12.727000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 00:42:12.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.892000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:42:12.892000 audit[1011]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd3273f670 a2=4000 a3=7ffd3273f70c items=0 ppid=1 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:12.901650 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:42:12.901707 kernel: audit: type=1327 audit(1752108132.892:87): proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:42:12.892000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:42:12.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.906892 kernel: audit: type=1130 audit(1752108132.892:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:12.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.911676 kernel: audit: type=1130 audit(1752108132.907:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.911695 systemd[1]: Started systemd-journald.service. Jul 10 00:42:12.911709 kernel: audit: type=1131 audit(1752108132.907:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.916947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:42:12.917309 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:42:12.919936 kernel: audit: type=1130 audit(1752108132.915:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.920197 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:42:12.920425 systemd[1]: Finished modprobe@drm.service. Jul 10 00:42:12.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.921551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:42:12.921760 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:42:12.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.922913 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:42:12.923125 systemd[1]: Finished modprobe@fuse.service. 
Jul 10 00:42:12.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.924242 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:42:12.924417 systemd[1]: Finished modprobe@loop.service. Jul 10 00:42:12.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.925759 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:42:12.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.927083 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:42:12.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.928450 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:42:12.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.929773 systemd[1]: Reached target network-pre.target. Jul 10 00:42:12.931811 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:42:12.933618 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:42:12.934450 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:42:12.935975 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:42:12.940242 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:42:12.941494 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:42:12.942698 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:42:12.943640 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:42:12.945709 systemd-journald[1011]: Time spent on flushing to /var/log/journal/6b8142feb0034aa0a270d5d43bd8c0c8 is 12.149ms for 1045 entries. Jul 10 00:42:12.945709 systemd-journald[1011]: System Journal (/var/log/journal/6b8142feb0034aa0a270d5d43bd8c0c8) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:42:13.589563 systemd-journald[1011]: Received client request to flush runtime journal. Jul 10 00:42:13.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:13.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:13.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:13.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:13.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:12.944728 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:42:12.948138 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:42:13.000350 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:42:13.001386 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:42:13.590239 udevadm[1053]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:42:13.002347 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:42:13.004176 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:42:13.074037 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:42:13.075242 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:42:13.077332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:42:13.094848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:42:13.396708 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:42:13.397817 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:42:13.590645 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:42:13.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.234981 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:42:14.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.236966 systemd[1]: Starting systemd-udevd.service... Jul 10 00:42:14.252852 systemd-udevd[1065]: Using default interface naming scheme 'v252'. Jul 10 00:42:14.265543 systemd[1]: Started systemd-udevd.service. Jul 10 00:42:14.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.269148 systemd[1]: Starting systemd-networkd.service... Jul 10 00:42:14.273719 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:42:14.307563 systemd[1]: Started systemd-userdbd.service. Jul 10 00:42:14.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 10 00:42:14.338728 systemd[1]: Found device dev-ttyS0.device. Jul 10 00:42:14.353682 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 10 00:42:14.362672 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:42:14.371871 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:42:14.371000 audit[1084]: AVC avc: denied { confidentiality } for pid=1084 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 00:42:14.420162 systemd-networkd[1071]: lo: Link UP Jul 10 00:42:14.420440 systemd-networkd[1071]: lo: Gained carrier Jul 10 00:42:14.420916 systemd-networkd[1071]: Enumeration completed Jul 10 00:42:14.421078 systemd[1]: Started systemd-networkd.service. Jul 10 00:42:14.421504 systemd-networkd[1071]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:42:14.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.422888 systemd-networkd[1071]: eth0: Link UP Jul 10 00:42:14.422967 systemd-networkd[1071]: eth0: Gained carrier Jul 10 00:42:14.440838 systemd-networkd[1071]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:42:14.371000 audit[1084]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b9cd8d6f00 a1=338ac a2=7f07cfb55bc5 a3=5 items=110 ppid=1065 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:14.371000 audit: CWD cwd="/" Jul 10 00:42:14.371000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=1 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=2 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=3 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=4 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=5 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=6 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=7 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=8 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=9 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=10 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=11 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=12 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=13 name=(null) inode=13937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=14 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=15 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=16 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=17 name=(null) inode=13939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=18 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=19 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=20 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=21 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=22 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=23 name=(null) inode=13942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=24 
name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=25 name=(null) inode=13943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=26 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=27 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=28 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=29 name=(null) inode=13945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=30 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=31 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=32 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=33 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=34 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=35 name=(null) inode=13948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=36 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=37 name=(null) inode=13949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=38 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=39 name=(null) inode=13950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=40 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=41 name=(null) inode=13951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=42 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=43 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=44 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=45 name=(null) inode=13953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=46 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=47 name=(null) inode=13954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=48 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=49 name=(null) inode=13955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=50 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=51 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=52 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=53 name=(null) inode=13957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=55 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=56 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=57 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=58 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=59 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=60 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=61 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=62 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=63 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=64 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=65 name=(null) inode=13963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=66 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=67 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=68 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=69 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=70 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=71 name=(null) inode=13966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=72 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=73 
name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=74 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=75 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=76 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=77 name=(null) inode=13969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=78 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=79 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.456707 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:42:14.456983 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 10 00:42:14.457148 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:42:14.371000 audit: PATH item=80 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=81 name=(null) inode=13971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=82 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=83 name=(null) inode=13972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=84 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=85 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=86 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=87 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=88 name=(null) inode=13973 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=89 name=(null) inode=13975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=90 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=91 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=92 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=93 name=(null) inode=13977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=94 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=95 name=(null) inode=13978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=96 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=97 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=98 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=99 name=(null) inode=13980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=100 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=101 name=(null) inode=13981 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=102 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=103 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=104 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=105 name=(null) inode=13983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=106 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=107 name=(null) inode=13984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PATH item=109 name=(null) inode=13281 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:42:14.371000 audit: PROCTITLE proctitle="(udev-worker)" Jul 10 00:42:14.500681 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:42:14.504707 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:42:14.507984 kernel: kvm: Nested Virtualization enabled Jul 10 00:42:14.508030 kernel: SVM: kvm: Nested Paging enabled Jul 10 00:42:14.508070 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 10 00:42:14.508086 kernel: SVM: Virtual GIF supported Jul 10 00:42:14.567783 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:42:14.595153 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:42:14.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.597405 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:42:14.604980 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:42:14.631355 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:42:14.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.651623 systemd[1]: Reached target cryptsetup.target. Jul 10 00:42:14.653367 systemd[1]: Starting lvm2-activation.service... Jul 10 00:42:14.657257 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:42:14.683411 systemd[1]: Finished lvm2-activation.service. Jul 10 00:42:14.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.684330 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:42:14.685184 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:42:14.685201 systemd[1]: Reached target local-fs.target. Jul 10 00:42:14.685973 systemd[1]: Reached target machines.target. Jul 10 00:42:14.687697 systemd[1]: Starting ldconfig.service... 
Jul 10 00:42:14.688644 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:42:14.688775 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:14.689712 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:42:14.691408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:42:14.693431 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:42:14.695355 systemd[1]: Starting systemd-sysext.service... Jul 10 00:42:14.696588 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Jul 10 00:42:14.697857 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:42:14.707038 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:42:14.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.712882 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:42:14.716383 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:42:14.716605 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:42:14.727690 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 00:42:14.731818 systemd-fsck[1114]: fsck.fat 4.2 (2021-01-31) Jul 10 00:42:14.731818 systemd-fsck[1114]: /dev/vda1: 790 files, 120731/258078 clusters Jul 10 00:42:14.733383 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:42:14.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:14.754558 systemd[1]: Mounting boot.mount... Jul 10 00:42:14.769585 systemd[1]: Mounted boot.mount. Jul 10 00:42:15.238006 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:42:15.237473 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:42:15.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.275677 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 00:42:15.279997 (sd-sysext)[1126]: Using extensions 'kubernetes'. Jul 10 00:42:15.280340 (sd-sysext)[1126]: Merged extensions into '/usr'. Jul 10 00:42:15.297034 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:42:15.298927 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:42:15.299886 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.301035 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:42:15.302761 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:42:15.304514 systemd[1]: Starting modprobe@loop.service... Jul 10 00:42:15.305502 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 10 00:42:15.305766 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.306040 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:42:15.309421 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:42:15.310793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:42:15.310933 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:42:15.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.312169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:42:15.312299 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:42:15.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.313684 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:42:15.313857 systemd[1]: Finished modprobe@loop.service. Jul 10 00:42:15.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.315667 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:42:15.315775 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.317848 systemd[1]: Finished systemd-sysext.service. Jul 10 00:42:15.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.320536 systemd[1]: Starting ensure-sysext.service... Jul 10 00:42:15.323099 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:42:15.328993 systemd[1]: Reloading. Jul 10 00:42:15.334684 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:42:15.336553 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:42:15.338058 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 10 00:42:15.338255 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:42:15.377629 /usr/lib/systemd/system-generators/torcx-generator[1162]: time="2025-07-10T00:42:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:42:15.377702 /usr/lib/systemd/system-generators/torcx-generator[1162]: time="2025-07-10T00:42:15Z" level=info msg="torcx already run" Jul 10 00:42:15.450690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:42:15.450708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:42:15.469908 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:42:15.520572 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:42:15.527856 systemd[1]: Finished ldconfig.service. Jul 10 00:42:15.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.529047 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:42:15.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.530963 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:42:15.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.534012 systemd[1]: Starting audit-rules.service... Jul 10 00:42:15.536325 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:42:15.538406 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:42:15.540915 systemd[1]: Starting systemd-resolved.service... Jul 10 00:42:15.543496 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:42:15.545567 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:42:15.547265 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:42:15.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.552046 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.554801 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:42:15.556670 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:42:15.558534 systemd[1]: Starting modprobe@loop.service... 
Jul 10 00:42:15.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.561000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:15.567000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:42:15.567000 audit[1239]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3a6f6990 a2=420 a3=0 items=0 ppid=1211 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:15.567000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:42:15.559309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.571728 augenrules[1239]: No rules Jul 10 00:42:15.559435 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.559570 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:42:15.560387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:42:15.560538 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:42:15.561730 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:42:15.561856 systemd[1]: Finished modprobe@loop.service. Jul 10 00:42:15.563098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:42:15.563225 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:42:15.566031 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 10 00:42:15.566174 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.568741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.569985 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:42:15.572140 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:42:15.573983 systemd[1]: Starting modprobe@loop.service... Jul 10 00:42:15.574836 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.574935 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.575044 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:42:15.576123 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:42:15.577318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:42:15.577457 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:42:15.578880 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:42:15.581913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:42:15.582058 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:42:15.583540 systemd[1]: Finished audit-rules.service. Jul 10 00:42:15.584822 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:42:15.584960 systemd[1]: Finished modprobe@loop.service. Jul 10 00:42:15.586852 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:42:15.586949 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.588245 systemd[1]: Starting systemd-update-done.service... Jul 10 00:42:15.592236 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.593892 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:42:15.595790 systemd[1]: Starting modprobe@drm.service... Jul 10 00:42:15.597556 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:42:15.599410 systemd[1]: Starting modprobe@loop.service... Jul 10 00:42:15.599910 systemd-networkd[1071]: eth0: Gained IPv6LL Jul 10 00:42:15.601876 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.601989 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.603214 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:42:15.607190 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:42:15.608388 systemd[1]: Finished systemd-update-done.service. Jul 10 00:42:15.609834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:42:15.609983 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:42:15.611316 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:42:15.611454 systemd[1]: Finished modprobe@drm.service. Jul 10 00:42:15.612665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 00:42:15.612803 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:42:15.614024 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:42:15.614174 systemd[1]: Finished modprobe@loop.service. Jul 10 00:42:15.615322 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:42:15.616829 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:42:15.616914 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.618535 systemd[1]: Finished ensure-sysext.service. Jul 10 00:42:15.635388 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:42:15.636184 systemd-timesyncd[1222]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:42:15.636236 systemd-timesyncd[1222]: Initial clock synchronization to Thu 2025-07-10 00:42:15.695447 UTC. Jul 10 00:42:15.636349 systemd-resolved[1218]: Positive Trust Anchors: Jul 10 00:42:15.636359 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:42:15.636386 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:42:15.637146 systemd[1]: Reached target time-set.target. Jul 10 00:42:15.643722 systemd-resolved[1218]: Defaulting to hostname 'linux'. Jul 10 00:42:15.645117 systemd[1]: Started systemd-resolved.service. Jul 10 00:42:15.646012 systemd[1]: Reached target network.target. Jul 10 00:42:15.646791 systemd[1]: Reached target network-online.target. Jul 10 00:42:15.647626 systemd[1]: Reached target nss-lookup.target. Jul 10 00:42:15.648442 systemd[1]: Reached target sysinit.target. Jul 10 00:42:15.649302 systemd[1]: Started motdgen.path. Jul 10 00:42:15.650028 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:42:15.651238 systemd[1]: Started logrotate.timer. Jul 10 00:42:15.652048 systemd[1]: Started mdadm.timer. Jul 10 00:42:15.652817 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:42:15.653692 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:42:15.653719 systemd[1]: Reached target paths.target. Jul 10 00:42:15.654464 systemd[1]: Reached target timers.target. Jul 10 00:42:15.655548 systemd[1]: Listening on dbus.socket. Jul 10 00:42:15.657406 systemd[1]: Starting docker.socket... Jul 10 00:42:15.659089 systemd[1]: Listening on sshd.socket. Jul 10 00:42:15.659932 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.660199 systemd[1]: Listening on docker.socket. Jul 10 00:42:15.660997 systemd[1]: Reached target sockets.target. Jul 10 00:42:15.661781 systemd[1]: Reached target basic.target. Jul 10 00:42:15.662644 systemd[1]: System is tainted: cgroupsv1 Jul 10 00:42:15.662706 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Jul 10 00:42:15.662725 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:42:15.663681 systemd[1]: Starting containerd.service... Jul 10 00:42:15.665409 systemd[1]: Starting dbus.service... Jul 10 00:42:15.667088 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:42:15.669221 systemd[1]: Starting extend-filesystems.service... Jul 10 00:42:15.670275 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:42:15.671417 systemd[1]: Starting kubelet.service... Jul 10 00:42:15.672265 jq[1275]: false Jul 10 00:42:15.673513 systemd[1]: Starting motdgen.service... Jul 10 00:42:15.675706 systemd[1]: Starting prepare-helm.service... Jul 10 00:42:15.677678 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:42:15.679767 systemd[1]: Starting sshd-keygen.service... Jul 10 00:42:15.683614 systemd[1]: Starting systemd-logind.service... Jul 10 00:42:15.684573 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:42:15.684629 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:42:15.685856 systemd[1]: Starting update-engine.service... Jul 10 00:42:15.687628 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:42:15.690365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:42:15.690366 dbus-daemon[1274]: [system] SELinux support is enabled Jul 10 00:42:15.690628 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:42:15.691196 systemd[1]: Started dbus.service. Jul 10 00:42:15.692122 jq[1294]: true Jul 10 00:42:15.697104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:42:15.697330 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:42:15.706455 tar[1300]: linux-amd64/helm Jul 10 00:42:15.701087 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:42:15.701118 systemd[1]: Reached target system-config.target. Jul 10 00:42:15.702185 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:42:15.702204 systemd[1]: Reached target user-config.target. Jul 10 00:42:15.720719 jq[1305]: true Jul 10 00:42:15.712545 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:42:15.712792 systemd[1]: Finished motdgen.service. Jul 10 00:42:15.716847 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:42:15.716863 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:42:15.728802 extend-filesystems[1276]: Found loop1 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found sr0 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda1 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda2 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda3 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found usr Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda4 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda6 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda7 Jul 10 00:42:15.730115 extend-filesystems[1276]: Found vda9 Jul 10 00:42:15.730115 extend-filesystems[1276]: Checking size of /dev/vda9 Jul 10 00:42:15.750324 env[1307]: time="2025-07-10T00:42:15.750258594Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:42:15.759781 update_engine[1293]: I0710 00:42:15.759434 1293 main.cc:92] Flatcar Update Engine starting Jul 10 00:42:15.761254 systemd[1]: Started update-engine.service. Jul 10 00:42:15.764633 update_engine[1293]: I0710 00:42:15.761294 1293 update_check_scheduler.cc:74] Next update check in 7m54s Jul 10 00:42:15.763633 systemd[1]: Started locksmithd.service. Jul 10 00:42:15.770095 env[1307]: time="2025-07-10T00:42:15.769910779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:42:15.770565 env[1307]: time="2025-07-10T00:42:15.770413442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:42:15.771449 systemd-logind[1287]: Watching system buttons on /dev/input/event1 (Power Button) Jul 10 00:42:15.771742 systemd-logind[1287]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:42:15.771823 env[1307]: time="2025-07-10T00:42:15.771787529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:42:15.771823 env[1307]: time="2025-07-10T00:42:15.771815000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772064027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772082882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772093843Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772102329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772161179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772382855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772505465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:42:15.772523 env[1307]: time="2025-07-10T00:42:15.772518479Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:42:15.772827 env[1307]: time="2025-07-10T00:42:15.772557823Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:42:15.772827 env[1307]: time="2025-07-10T00:42:15.772568022Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:42:15.773519 systemd-logind[1287]: New seat seat0. Jul 10 00:42:15.780884 systemd[1]: Started systemd-logind.service. Jul 10 00:42:15.819017 locksmithd[1335]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:42:15.830104 extend-filesystems[1276]: Resized partition /dev/vda9 Jul 10 00:42:15.880837 extend-filesystems[1344]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:42:15.943691 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:42:16.013689 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:42:16.159109 extend-filesystems[1344]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:42:16.159109 extend-filesystems[1344]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:42:16.159109 extend-filesystems[1344]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:42:16.163995 extend-filesystems[1276]: Resized filesystem in /dev/vda9 Jul 10 00:42:16.163431 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162463719Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162583161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162617750Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162699806Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162723011Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162762475Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162783004Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162800890Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162818280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162858380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162874165Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.162894291Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.163173496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:42:16.165760 env[1307]: time="2025-07-10T00:42:16.163336570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:42:16.166096 bash[1331]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:42:16.163739 systemd[1]: Finished extend-filesystems.service. Jul 10 00:42:16.166329 env[1307]: time="2025-07-10T00:42:16.165555546Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:42:16.166329 env[1307]: time="2025-07-10T00:42:16.166045192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166329 env[1307]: time="2025-07-10T00:42:16.166067881Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:42:16.165010 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166341373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166360843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166379243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166395836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166409542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166423056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166436420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166449288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.166499 env[1307]: time="2025-07-10T00:42:16.166466851Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167356102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167387169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167403015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167416126Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167434394Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167449635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167491895Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:42:16.171799 env[1307]: time="2025-07-10T00:42:16.167542259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 00:42:16.169375 systemd[1]: Started containerd.service. Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.167965382Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:42:16.172094 env[1307]: 
time="2025-07-10T00:42:16.168039021Z" level=info msg="Connect containerd service" Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.168096733Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.168780934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.169209274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.169255319Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:42:16.172094 env[1307]: time="2025-07-10T00:42:16.169981718Z" level=info msg="containerd successfully booted in 0.421557s" Jul 10 00:42:16.176201 env[1307]: time="2025-07-10T00:42:16.176130378Z" level=info msg="Start subscribing containerd event" Jul 10 00:42:16.176474 env[1307]: time="2025-07-10T00:42:16.176455093Z" level=info msg="Start recovering state" Jul 10 00:42:16.176696 env[1307]: time="2025-07-10T00:42:16.176631924Z" level=info msg="Start event monitor" Jul 10 00:42:16.176761 env[1307]: time="2025-07-10T00:42:16.176680220Z" level=info msg="Start snapshots syncer" Jul 10 00:42:16.176761 env[1307]: time="2025-07-10T00:42:16.176714738Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:42:16.176761 env[1307]: time="2025-07-10T00:42:16.176739224Z" level=info msg="Start streaming server" Jul 10 00:42:16.285249 tar[1300]: linux-amd64/LICENSE Jul 10 00:42:16.285432 tar[1300]: linux-amd64/README.md Jul 10 00:42:16.290976 systemd[1]: Finished prepare-helm.service. Jul 10 00:42:16.425244 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:42:16.469137 systemd[1]: Finished sshd-keygen.service. Jul 10 00:42:16.471714 systemd[1]: Starting issuegen.service... Jul 10 00:42:16.477845 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:42:16.478059 systemd[1]: Finished issuegen.service. Jul 10 00:42:16.480542 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:42:16.492242 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:42:16.494886 systemd[1]: Started getty@tty1.service. Jul 10 00:42:16.496931 systemd[1]: Started serial-getty@ttyS0.service. Jul 10 00:42:16.498052 systemd[1]: Reached target getty.target. Jul 10 00:42:17.375084 systemd[1]: Started kubelet.service. Jul 10 00:42:17.376287 systemd[1]: Reached target multi-user.target. Jul 10 00:42:17.378626 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:42:17.385910 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:42:17.386139 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:42:17.388352 systemd[1]: Startup finished in 6.056s (kernel) + 9.305s (userspace) = 15.361s. 
Jul 10 00:42:18.142110 kubelet[1375]: E0710 00:42:18.142008 1375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:42:18.144106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:42:18.144265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:42:25.541074 systemd[1]: Created slice system-sshd.slice. Jul 10 00:42:25.542222 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:53458.service. Jul 10 00:42:25.584218 sshd[1385]: Accepted publickey for core from 10.0.0.1 port 53458 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:25.585784 sshd[1385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:25.593373 systemd[1]: Created slice user-500.slice. Jul 10 00:42:25.594340 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:42:25.595874 systemd-logind[1287]: New session 1 of user core. Jul 10 00:42:25.602764 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:42:25.604001 systemd[1]: Starting user@500.service... Jul 10 00:42:25.607952 (systemd)[1389]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:25.681030 systemd[1389]: Queued start job for default target default.target. Jul 10 00:42:25.681247 systemd[1389]: Reached target paths.target. Jul 10 00:42:25.681261 systemd[1389]: Reached target sockets.target. Jul 10 00:42:25.681273 systemd[1389]: Reached target timers.target. Jul 10 00:42:25.681283 systemd[1389]: Reached target basic.target. Jul 10 00:42:25.681321 systemd[1389]: Reached target default.target. Jul 10 00:42:25.681342 systemd[1389]: Startup finished in 65ms. Jul 10 00:42:25.681593 systemd[1]: Started user@500.service. Jul 10 00:42:25.683069 systemd[1]: Started session-1.scope. Jul 10 00:42:25.733824 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:53474.service. Jul 10 00:42:25.774813 sshd[1399]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:25.776041 sshd[1399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:25.780087 systemd-logind[1287]: New session 2 of user core. Jul 10 00:42:25.780839 systemd[1]: Started session-2.scope. Jul 10 00:42:25.835807 sshd[1399]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:25.838570 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:53490.service. Jul 10 00:42:25.839079 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:53474.service: Deactivated successfully. Jul 10 00:42:25.840218 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:42:25.840386 systemd-logind[1287]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:42:25.841362 systemd-logind[1287]: Removed session 2. Jul 10 00:42:25.879104 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 53490 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:25.880419 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:25.884224 systemd-logind[1287]: New session 3 of user core. Jul 10 00:42:25.885025 systemd[1]: Started session-3.scope. 
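The kubelet failure at the start of this stretch is the expected first-boot state: /var/lib/kubelet/config.yaml is normally generated by kubeadm during init/join, so the unit will keep exiting with status 1 and being restarted until that file exists. As a hedged sketch only (on a kubeadm-managed node you would not write this by hand), the file is a KubeletConfiguration along these lines:

    # illustrative KubeletConfiguration; kubeadm generates the real one
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # must match the container runtime's cgroup driver
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet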
Jul 10 00:42:25.936147 sshd[1405]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:25.939561 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:53500.service. Jul 10 00:42:25.940326 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:53490.service: Deactivated successfully. Jul 10 00:42:25.941409 systemd-logind[1287]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:42:25.941427 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:42:25.942832 systemd-logind[1287]: Removed session 3. Jul 10 00:42:25.980375 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 53500 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:25.981796 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:25.985874 systemd-logind[1287]: New session 4 of user core. Jul 10 00:42:25.986644 systemd[1]: Started session-4.scope. Jul 10 00:42:26.041104 sshd[1412]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:26.044073 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:53514.service. Jul 10 00:42:26.044739 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:53500.service: Deactivated successfully. Jul 10 00:42:26.045771 systemd-logind[1287]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:42:26.045871 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:42:26.046995 systemd-logind[1287]: Removed session 4. Jul 10 00:42:26.082597 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 53514 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:26.083843 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:26.087104 systemd-logind[1287]: New session 5 of user core. Jul 10 00:42:26.087819 systemd[1]: Started session-5.scope. Jul 10 00:42:26.147748 sudo[1424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:42:26.147988 sudo[1424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:42:26.158864 dbus-daemon[1274]: avc: received setenforce notice (enforcing=1) Jul 10 00:42:26.160586 sudo[1424]: pam_unix(sudo:session): session closed for user root Jul 10 00:42:26.162370 sshd[1419]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:26.165003 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:53518.service. Jul 10 00:42:26.165747 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:53514.service: Deactivated successfully. Jul 10 00:42:26.166749 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:42:26.168133 systemd-logind[1287]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:42:26.169008 systemd-logind[1287]: Removed session 5. Jul 10 00:42:26.207622 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 53518 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:26.208844 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:26.211990 systemd-logind[1287]: New session 6 of user core. Jul 10 00:42:26.212790 systemd[1]: Started session-6.scope.
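The sudo entry above shows the core user switching SELinux to enforcing with setenforce 1; the dbus-daemon avc line is just the bus daemon acknowledging the policy-mode change. If you need to confirm or temporarily revert the runtime mode on a host like this, the standard (non-persistent) commands are:

    getenforce           # prints Enforcing, Permissive or Disabled
    sestatus             # loaded policy plus current and configured modes
    sudo setenforce 0    # back to permissive until the next boot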
Jul 10 00:42:26.266578 sudo[1433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:42:26.266862 sudo[1433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:42:26.269328 sudo[1433]: pam_unix(sudo:session): session closed for user root Jul 10 00:42:26.273195 sudo[1432]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:42:26.273383 sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:42:26.282289 systemd[1]: Stopping audit-rules.service... Jul 10 00:42:26.282000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 00:42:26.283396 auditctl[1436]: No rules Jul 10 00:42:26.283747 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:42:26.283970 systemd[1]: Stopped audit-rules.service. Jul 10 00:42:26.284531 kernel: kauditd_printk_skb: 164 callbacks suppressed Jul 10 00:42:26.284570 kernel: audit: type=1305 audit(1752108146.282:141): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 00:42:26.285583 systemd[1]: Starting audit-rules.service... Jul 10 00:42:26.282000 audit[1436]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc865120b0 a2=420 a3=0 items=0 ppid=1 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:26.291412 kernel: audit: type=1300 audit(1752108146.282:141): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc865120b0 a2=420 a3=0 items=0 ppid=1 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:26.291467 kernel: audit: type=1327 audit(1752108146.282:141): proctitle=2F7362696E2F617564697463746C002D44 Jul 10 00:42:26.282000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 10 00:42:26.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.295865 kernel: audit: type=1131 audit(1752108146.283:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.299824 augenrules[1454]: No rules Jul 10 00:42:26.300371 systemd[1]: Finished audit-rules.service. Jul 10 00:42:26.301163 sudo[1432]: pam_unix(sudo:session): session closed for user root Jul 10 00:42:26.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.302368 sshd[1426]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:26.300000 audit[1432]: USER_END pid=1432 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 10 00:42:26.309496 kernel: audit: type=1130 audit(1752108146.299:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.309528 kernel: audit: type=1106 audit(1752108146.300:144): pid=1432 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.309546 kernel: audit: type=1104 audit(1752108146.300:145): pid=1432 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.300000 audit[1432]: CRED_DISP pid=1432 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.305983 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:53526.service. Jul 10 00:42:26.306433 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:53518.service: Deactivated successfully. Jul 10 00:42:26.309629 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:42:26.309975 systemd-logind[1287]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:42:26.310703 systemd-logind[1287]: Removed session 6. Jul 10 00:42:26.313321 kernel: audit: type=1106 audit(1752108146.303:146): pid=1426 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.303000 audit[1426]: USER_END pid=1426 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.303000 audit[1426]: CRED_DISP pid=1426 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.320997 kernel: audit: type=1104 audit(1752108146.303:147): pid=1426 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.321034 kernel: audit: type=1130 audit(1752108146.305:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.99:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.99:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.99:22-10.0.0.1:53518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 10 00:42:26.347000 audit[1460]: USER_ACCT pid=1460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.348290 sshd[1460]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:42:26.348000 audit[1460]: CRED_ACQ pid=1460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.348000 audit[1460]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff18cb4e00 a2=3 a3=0 items=0 ppid=1 pid=1460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:26.348000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:42:26.349375 sshd[1460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:42:26.352540 systemd-logind[1287]: New session 7 of user core. Jul 10 00:42:26.353245 systemd[1]: Started session-7.scope. Jul 10 00:42:26.356000 audit[1460]: USER_START pid=1460 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.358000 audit[1464]: CRED_ACQ pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:42:26.404000 audit[1465]: USER_ACCT pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.405606 sudo[1465]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:42:26.405000 audit[1465]: CRED_REFR pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.405801 sudo[1465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:42:26.406000 audit[1465]: USER_START pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:42:26.440705 systemd[1]: Starting docker.service... 
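The audit-rules sequence above (removing the rules fragments, then systemctl restart audit-rules) ends with both auditctl and augenrules reporting "No rules", i.e. the kernel's audit ruleset is now empty. For reference, the usual commands for inspecting and rebuilding the ruleset on an augenrules-based setup are:

    auditctl -l              # list rules currently loaded in the kernel
    auditctl -D              # flush all loaded rules (what the restart did here)
    ls /etc/audit/rules.d/   # fragments that augenrules concatenates
    augenrules --load        # regenerate and load the combined ruleset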
Jul 10 00:42:26.502035 env[1477]: time="2025-07-10T00:42:26.501962916Z" level=info msg="Starting up" Jul 10 00:42:26.504968 env[1477]: time="2025-07-10T00:42:26.504944975Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:42:26.505054 env[1477]: time="2025-07-10T00:42:26.505034326Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:42:26.505140 env[1477]: time="2025-07-10T00:42:26.505119140Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:42:26.505240 env[1477]: time="2025-07-10T00:42:26.505210218Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:42:26.508163 env[1477]: time="2025-07-10T00:42:26.508133080Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:42:26.508250 env[1477]: time="2025-07-10T00:42:26.508218607Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:42:26.508385 env[1477]: time="2025-07-10T00:42:26.508355027Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:42:26.508457 env[1477]: time="2025-07-10T00:42:26.508438205Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:42:27.080307 env[1477]: time="2025-07-10T00:42:27.080244961Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 10 00:42:27.080307 env[1477]: time="2025-07-10T00:42:27.080282054Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 10 00:42:27.080674 env[1477]: time="2025-07-10T00:42:27.080625813Z" level=info msg="Loading containers: start." 
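The paired "parsed scheme: unix" / "ClientConn switching balancer to pick_first" messages are dockerd opening its gRPC clients to the bundled containerd over /var/run/docker/libcontainerd/docker-containerd.sock, and the blkio weight warnings usually just mean this 5.15 kernel does not expose blkio.weight (CFQ is gone), so those options are ignored. Once the daemon is up, a quick way to see what it actually settled on:

    docker info --format '{{.Driver}} {{.CgroupDriver}} {{.CgroupVersion}}'
    docker info 2>&1 | grep -i warning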
Jul 10 00:42:27.138000 audit[1511]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.138000 audit[1511]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffceeb2e40 a2=0 a3=7fffceeb2e2c items=0 ppid=1477 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 10 00:42:27.140000 audit[1513]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.140000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc89b18c80 a2=0 a3=7ffc89b18c6c items=0 ppid=1477 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.140000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 10 00:42:27.142000 audit[1515]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.142000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe5bab9cf0 a2=0 a3=7ffe5bab9cdc items=0 ppid=1477 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 00:42:27.143000 audit[1517]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.143000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeefaf1360 a2=0 a3=7ffeefaf134c items=0 ppid=1477 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 00:42:27.145000 audit[1519]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.145000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd46142490 a2=0 a3=7ffd4614247c items=0 ppid=1477 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.145000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 10 00:42:27.159000 audit[1524]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jul 10 00:42:27.159000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff35730720 a2=0 a3=7fff3573070c items=0 ppid=1477 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.159000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 10 00:42:27.167000 audit[1526]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.167000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb07b9330 a2=0 a3=7fffb07b931c items=0 ppid=1477 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.167000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 10 00:42:27.169000 audit[1528]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.169000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdc41eb020 a2=0 a3=7ffdc41eb00c items=0 ppid=1477 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.169000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 10 00:42:27.171000 audit[1530]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.171000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc02cad970 a2=0 a3=7ffc02cad95c items=0 ppid=1477 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.171000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:42:27.179000 audit[1534]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.179000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc7b287520 a2=0 a3=7ffc7b28750c items=0 ppid=1477 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:42:27.184000 audit[1535]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.184000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffe2dbed60 a2=0 a3=7fffe2dbed4c items=0 ppid=1477 
pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.184000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:42:27.253675 kernel: Initializing XFRM netlink socket Jul 10 00:42:27.280977 env[1477]: time="2025-07-10T00:42:27.280936550Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 10 00:42:27.300000 audit[1543]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.300000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc2e136410 a2=0 a3=7ffc2e1363fc items=0 ppid=1477 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.300000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 10 00:42:27.315000 audit[1546]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.315000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe6de5bae0 a2=0 a3=7ffe6de5bacc items=0 ppid=1477 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.315000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 10 00:42:27.318000 audit[1549]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.318000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdadfacfa0 a2=0 a3=7ffdadfacf8c items=0 ppid=1477 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.318000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 10 00:42:27.320000 audit[1551]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.320000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff4b8fba30 a2=0 a3=7fff4b8fba1c items=0 ppid=1477 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.320000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 10 00:42:27.322000 audit[1553]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.322000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff5e0830d0 a2=0 a3=7fff5e0830bc items=0 ppid=1477 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 10 00:42:27.324000 audit[1555]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.324000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd331b26b0 a2=0 a3=7ffd331b269c items=0 ppid=1477 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.324000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 10 00:42:27.325000 audit[1557]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.325000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdfbc4bcf0 a2=0 a3=7ffdfbc4bcdc items=0 ppid=1477 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.325000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 10 00:42:27.332000 audit[1560]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.332000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc93b0f4f0 a2=0 a3=7ffc93b0f4dc items=0 ppid=1477 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 10 00:42:27.334000 audit[1562]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.334000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffcbcff1900 a2=0 a3=7ffcbcff18ec items=0 ppid=1477 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.334000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 00:42:27.336000 audit[1564]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.336000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe0d54a130 a2=0 a3=7ffe0d54a11c items=0 ppid=1477 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 00:42:27.337000 audit[1566]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.337000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffda8bb48e0 a2=0 a3=7ffda8bb48cc items=0 ppid=1477 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 10 00:42:27.338834 systemd-networkd[1071]: docker0: Link UP Jul 10 00:42:27.974000 audit[1570]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.974000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff25bd6660 a2=0 a3=7fff25bd664c items=0 ppid=1477 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.974000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:42:27.979000 audit[1571]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:27.979000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff3806a420 a2=0 a3=7fff3806a40c items=0 ppid=1477 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:27.979000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:42:27.980079 env[1477]: time="2025-07-10T00:42:27.980038875Z" level=info msg="Loading containers: done." 
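Each NETFILTER_CFG/SYSCALL/PROCTITLE triple above is one iptables invocation made by dockerd while it sets up the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE-* chains for the default 172.17.0.0/16 bridge; the proctitle field is simply the command line, hex-encoded with NUL-separated arguments, so it can be decoded straight from the log:

    # decode one PROCTITLE value (NUL separators become spaces)
    echo 2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER

    # inspect the resulting chains once docker is running
    iptables -t nat -nL DOCKER
    iptables -nL DOCKER-USER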
Jul 10 00:42:28.135302 env[1477]: time="2025-07-10T00:42:28.135240194Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:42:28.135490 env[1477]: time="2025-07-10T00:42:28.135463967Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 10 00:42:28.135615 env[1477]: time="2025-07-10T00:42:28.135594506Z" level=info msg="Daemon has completed initialization" Jul 10 00:42:28.156849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:42:28.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:28.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:28.156999 systemd[1]: Stopped kubelet.service. Jul 10 00:42:28.158551 systemd[1]: Starting kubelet.service... Jul 10 00:42:28.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:28.287711 systemd[1]: Started kubelet.service. Jul 10 00:42:28.694567 kubelet[1587]: E0710 00:42:28.694494 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:42:28.697453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:42:28.697609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:42:28.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 00:42:29.466991 systemd[1]: Started docker.service. Jul 10 00:42:29.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:29.476137 env[1477]: time="2025-07-10T00:42:29.476058810Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:42:30.408987 env[1307]: time="2025-07-10T00:42:30.408942685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:42:31.097247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634264975.mount: Deactivated successfully. 
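From here the env[1307] (containerd) entries are the CRI image service handling PullImage requests for the control-plane images, starting with kube-apiserver v1.31.10; the tmpmounts units are the scratch mounts containerd uses while unpacking layers. These images land in containerd's k8s.io namespace rather than docker's image store, so (assuming crictl is pointed at the containerd socket) they are inspected with ctr/crictl rather than docker images:

    ctr -n k8s.io images ls | grep kube-apiserver
    crictl images
    crictl pull registry.k8s.io/kube-apiserver:v1.31.10   # manual re-pull, if needed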
Jul 10 00:42:33.425515 env[1307]: time="2025-07-10T00:42:33.425436023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:33.427366 env[1307]: time="2025-07-10T00:42:33.427310731Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:33.429153 env[1307]: time="2025-07-10T00:42:33.429109288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:33.430728 env[1307]: time="2025-07-10T00:42:33.430698323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:33.431451 env[1307]: time="2025-07-10T00:42:33.431406492Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 00:42:33.432198 env[1307]: time="2025-07-10T00:42:33.432153624Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:42:35.148188 env[1307]: time="2025-07-10T00:42:35.148121344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:35.149967 env[1307]: time="2025-07-10T00:42:35.149934686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:35.152146 env[1307]: time="2025-07-10T00:42:35.152095443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:35.155411 env[1307]: time="2025-07-10T00:42:35.155377568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:35.156231 env[1307]: time="2025-07-10T00:42:35.156190629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 00:42:35.156811 env[1307]: time="2025-07-10T00:42:35.156748482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:42:36.872801 env[1307]: time="2025-07-10T00:42:36.872715646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:36.876054 env[1307]: time="2025-07-10T00:42:36.875958308Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:36.878284 env[1307]: 
time="2025-07-10T00:42:36.878235324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:36.880413 env[1307]: time="2025-07-10T00:42:36.880365590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:36.881242 env[1307]: time="2025-07-10T00:42:36.881209194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 00:42:36.881992 env[1307]: time="2025-07-10T00:42:36.881893277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:42:38.376537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038494414.mount: Deactivated successfully. Jul 10 00:42:38.906915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:42:38.907111 systemd[1]: Stopped kubelet.service. Jul 10 00:42:38.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:38.908577 systemd[1]: Starting kubelet.service... Jul 10 00:42:38.913630 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 10 00:42:38.913711 kernel: audit: type=1130 audit(1752108158.906:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:38.913742 kernel: audit: type=1131 audit(1752108158.906:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:38.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:39.004459 systemd[1]: Started kubelet.service. Jul 10 00:42:39.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:39.008688 kernel: audit: type=1130 audit(1752108159.004:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:39.503807 kubelet[1632]: E0710 00:42:39.503729 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:42:39.505679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:42:39.505853 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 10 00:42:39.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 00:42:39.509688 kernel: audit: type=1131 audit(1752108159.505:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 00:42:39.788113 env[1307]: time="2025-07-10T00:42:39.787940142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:39.790317 env[1307]: time="2025-07-10T00:42:39.790282378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:39.791913 env[1307]: time="2025-07-10T00:42:39.791883550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:39.793287 env[1307]: time="2025-07-10T00:42:39.793229967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:39.793575 env[1307]: time="2025-07-10T00:42:39.793537449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 00:42:39.794134 env[1307]: time="2025-07-10T00:42:39.794102473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:42:41.217816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753684395.mount: Deactivated successfully. 
Jul 10 00:42:42.698350 env[1307]: time="2025-07-10T00:42:42.698260244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:42.700240 env[1307]: time="2025-07-10T00:42:42.700165494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:42.702517 env[1307]: time="2025-07-10T00:42:42.702468263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:42.704817 env[1307]: time="2025-07-10T00:42:42.704761180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:42.705725 env[1307]: time="2025-07-10T00:42:42.705682549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:42:42.706467 env[1307]: time="2025-07-10T00:42:42.706421565Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:42:43.229249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257263557.mount: Deactivated successfully. Jul 10 00:42:43.235524 env[1307]: time="2025-07-10T00:42:43.235482680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:43.237423 env[1307]: time="2025-07-10T00:42:43.237373506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:43.239105 env[1307]: time="2025-07-10T00:42:43.239065269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:43.240484 env[1307]: time="2025-07-10T00:42:43.240448762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:43.241073 env[1307]: time="2025-07-10T00:42:43.241042535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:42:43.241562 env[1307]: time="2025-07-10T00:42:43.241511679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:42:43.781562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851443994.mount: Deactivated successfully. 
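Note that pause:3.10 is pulled here while the cri plugin config dumped earlier advertises SandboxImage registry.k8s.io/pause:3.6; that mismatch between the expected pause image and containerd's default is common and usually harmless, but it can be aligned on the containerd side if desired:

    containerd config default | grep sandbox_image    # built-in default
    grep -n sandbox_image /etc/containerd/config.toml # this host's override, if any
    # to pin it, set (config version 2 layout assumed):
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    # then:
    systemctl restart containerd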
Jul 10 00:42:47.320648 env[1307]: time="2025-07-10T00:42:47.320584897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:47.322713 env[1307]: time="2025-07-10T00:42:47.322678159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:47.324529 env[1307]: time="2025-07-10T00:42:47.324496203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:47.327160 env[1307]: time="2025-07-10T00:42:47.327133480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:47.328024 env[1307]: time="2025-07-10T00:42:47.327993654Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 10 00:42:49.656943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:42:49.657132 systemd[1]: Stopped kubelet.service. Jul 10 00:42:49.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.658915 systemd[1]: Starting kubelet.service... Jul 10 00:42:49.664854 kernel: audit: type=1130 audit(1752108169.656:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.664958 kernel: audit: type=1131 audit(1752108169.656:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.747439 systemd[1]: Started kubelet.service. Jul 10 00:42:49.751676 kernel: audit: type=1130 audit(1752108169.747:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.780126 systemd[1]: Stopping kubelet.service... Jul 10 00:42:49.781336 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:42:49.781561 systemd[1]: Stopped kubelet.service. Jul 10 00:42:49.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.783614 systemd[1]: Starting kubelet.service... 
Jul 10 00:42:49.786677 kernel: audit: type=1131 audit(1752108169.781:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:49.803917 systemd[1]: Reloading. Jul 10 00:42:49.865777 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-07-10T00:42:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:42:49.866138 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-07-10T00:42:49Z" level=info msg="torcx already run" Jul 10 00:42:50.580329 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:42:50.580346 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:42:50.600910 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:42:50.677501 systemd[1]: Started kubelet.service. Jul 10 00:42:50.679264 systemd[1]: Stopping kubelet.service... Jul 10 00:42:50.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.679552 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:42:50.679902 systemd[1]: Stopped kubelet.service. Jul 10 00:42:50.681394 systemd[1]: Starting kubelet.service... Jul 10 00:42:50.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.685165 kernel: audit: type=1130 audit(1752108170.676:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.685218 kernel: audit: type=1131 audit(1752108170.678:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.770217 systemd[1]: Started kubelet.service. Jul 10 00:42:50.776029 kernel: audit: type=1130 audit(1752108170.768:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:50.800246 kubelet[1766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
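The flag deprecation warnings this kubelet prints here and just below all point at the file passed via --config. A minimal sketch of the corresponding KubeletConfiguration fields, rendered as JSON from Python purely for illustration (field names are assumed from upstream kubelet documentation; the containerd socket path is a placeholder, and the volume plugin directory is the Flexvolume path this kubelet reports later in the log):

    # Sketch only: KubeletConfiguration equivalents of the deprecated kubelet flags logged here.
    # Values are placeholders or taken from elsewhere in this log, not read from this host's config.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint (socket path assumed, not shown in this log)
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir (matches the Flexvolume directory logged further down)
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(kubelet_config, indent=2))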
Jul 10 00:42:50.800246 kubelet[1766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:42:50.800246 kubelet[1766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:42:50.800696 kubelet[1766]: I0710 00:42:50.800305 1766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:42:50.988456 kubelet[1766]: I0710 00:42:50.988386 1766 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:42:50.988456 kubelet[1766]: I0710 00:42:50.988423 1766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:42:50.988731 kubelet[1766]: I0710 00:42:50.988684 1766 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:42:51.008833 kubelet[1766]: E0710 00:42:51.008750 1766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:51.009632 kubelet[1766]: I0710 00:42:51.009595 1766 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:42:51.016041 kubelet[1766]: E0710 00:42:51.016006 1766 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:42:51.016103 kubelet[1766]: I0710 00:42:51.016042 1766 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:42:51.021404 kubelet[1766]: I0710 00:42:51.021350 1766 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:42:51.021839 kubelet[1766]: I0710 00:42:51.021807 1766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:42:51.021986 kubelet[1766]: I0710 00:42:51.021954 1766 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:42:51.022190 kubelet[1766]: I0710 00:42:51.021980 1766 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:42:51.022329 kubelet[1766]: I0710 00:42:51.022207 1766 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:42:51.022329 kubelet[1766]: I0710 00:42:51.022217 1766 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:42:51.022403 kubelet[1766]: I0710 00:42:51.022353 1766 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:42:51.028520 kubelet[1766]: I0710 00:42:51.028458 1766 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:42:51.028520 kubelet[1766]: I0710 00:42:51.028533 1766 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:42:51.028772 kubelet[1766]: I0710 00:42:51.028604 1766 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:42:51.028772 kubelet[1766]: I0710 00:42:51.028640 1766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:42:51.034798 kubelet[1766]: W0710 00:42:51.034729 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:51.035012 kubelet[1766]: E0710 00:42:51.034822 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:51.035012 kubelet[1766]: W0710 00:42:51.034889 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:51.035012 kubelet[1766]: E0710 00:42:51.034917 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:51.036211 kubelet[1766]: I0710 00:42:51.036172 1766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:42:51.036799 kubelet[1766]: I0710 00:42:51.036778 1766 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:42:51.036927 kubelet[1766]: W0710 00:42:51.036872 1766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:42:51.040597 kubelet[1766]: I0710 00:42:51.040558 1766 server.go:1274] "Started kubelet" Jul 10 00:42:51.040904 kubelet[1766]: I0710 00:42:51.040876 1766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:42:51.041235 kubelet[1766]: I0710 00:42:51.041209 1766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:42:51.041567 kubelet[1766]: I0710 00:42:51.041548 1766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:42:51.041000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:51.043245 kubelet[1766]: I0710 00:42:51.043072 1766 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 00:42:51.043245 kubelet[1766]: I0710 00:42:51.043103 1766 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 00:42:51.043245 kubelet[1766]: I0710 00:42:51.043163 1766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:42:51.043597 kubelet[1766]: I0710 00:42:51.043580 1766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:42:51.044457 kubelet[1766]: I0710 00:42:51.044423 1766 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:42:51.045450 kubelet[1766]: I0710 00:42:51.045424 1766 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:42:51.045556 kubelet[1766]: E0710 00:42:51.045543 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 10 00:42:51.046070 kubelet[1766]: I0710 00:42:51.046052 1766 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:42:51.046119 kubelet[1766]: I0710 00:42:51.046115 1766 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:42:51.041000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:51.047146 kubelet[1766]: W0710 00:42:51.046382 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:51.047146 kubelet[1766]: E0710 00:42:51.046420 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:51.047146 kubelet[1766]: E0710 00:42:51.046464 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Jul 10 00:42:51.047146 kubelet[1766]: I0710 00:42:51.046835 1766 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:42:51.047146 kubelet[1766]: I0710 00:42:51.046907 1766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:42:51.048059 kernel: audit: type=1400 audit(1752108171.041:198): avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:51.048113 kernel: audit: type=1401 audit(1752108171.041:198): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:51.048135 kernel: audit: type=1300 audit(1752108171.041:198): arch=c000003e syscall=188 success=no exit=-22 a0=c000d9a330 a1=c000c71308 a2=c000d9a300 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.041000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d9a330 a1=c000c71308 a2=c000d9a300 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.053300 kubelet[1766]: E0710 00:42:51.053274 1766 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:42:51.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:51.041000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:51.041000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:51.041000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d9c4a0 a1=c000c71320 a2=c000d9a3c0 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:51.054198 kubelet[1766]: I0710 00:42:51.053728 1766 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:42:51.055252 kubelet[1766]: E0710 00:42:51.054250 1766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bd1936ee27e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:42:51.040532449 +0000 UTC m=+0.266126109,LastTimestamp:2025-07-10 00:42:51.040532449 +0000 UTC m=+0.266126109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:42:51.054000 audit[1779]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.054000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd6bc05450 a2=0 a3=7ffd6bc0543c items=0 ppid=1766 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 00:42:51.055000 audit[1780]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.055000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb5f48d90 a2=0 a3=7ffeb5f48d7c items=0 ppid=1766 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 00:42:51.058000 audit[1782]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.058000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcbb2e5460 a2=0 a3=7ffcbb2e544c items=0 ppid=1766 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:42:51.059000 audit[1784]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.059000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd87b9d710 a2=0 a3=7ffd87b9d6fc items=0 ppid=1766 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:42:51.065000 audit[1789]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.065000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff8c3eabd0 a2=0 a3=7fff8c3eabbc items=0 ppid=1766 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 10 00:42:51.067646 kubelet[1766]: I0710 00:42:51.067595 1766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:42:51.066000 audit[1790]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:42:51.066000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe00f65e40 a2=0 a3=7ffe00f65e2c items=0 ppid=1766 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 00:42:51.069062 kubelet[1766]: I0710 00:42:51.069039 1766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:42:51.069118 kubelet[1766]: I0710 00:42:51.069079 1766 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:42:51.069118 kubelet[1766]: I0710 00:42:51.069106 1766 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:42:51.069179 kubelet[1766]: E0710 00:42:51.069148 1766 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:42:51.067000 audit[1792]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.067000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc312f6b60 a2=0 a3=7ffc312f6b4c items=0 ppid=1766 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 00:42:51.068000 audit[1793]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:42:51.068000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd49d72a70 a2=0 a3=7ffd49d72a5c items=0 ppid=1766 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 00:42:51.068000 audit[1795]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.068000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5d4df320 a2=0 a3=7ffc5d4df30c items=0 ppid=1766 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 00:42:51.070988 kubelet[1766]: I0710 00:42:51.070955 1766 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:42:51.070988 kubelet[1766]: I0710 00:42:51.070976 1766 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:42:51.071082 kubelet[1766]: I0710 00:42:51.071000 1766 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:42:51.069000 audit[1796]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:42:51.069000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd5e9cd80 a2=0 a3=7ffdd5e9cd6c items=0 ppid=1766 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.069000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 00:42:51.071506 kubelet[1766]: W0710 00:42:51.071456 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:51.071574 kubelet[1766]: E0710 00:42:51.071514 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:51.070000 audit[1798]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:42:51.070000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe4d873ee0 a2=0 a3=7ffe4d873ecc items=0 ppid=1766 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 00:42:51.071000 audit[1799]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:42:51.071000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc93f0850 a2=0 a3=7ffcc93f083c items=0 ppid=1766 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 00:42:51.146572 kubelet[1766]: E0710 00:42:51.146521 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:51.169897 kubelet[1766]: E0710 00:42:51.169844 1766 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:42:51.247209 kubelet[1766]: E0710 00:42:51.247040 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:51.247547 kubelet[1766]: E0710 00:42:51.247493 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Jul 10 00:42:51.348093 kubelet[1766]: E0710 00:42:51.348070 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:51.370331 kubelet[1766]: E0710 00:42:51.370264 1766 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:42:51.448913 kubelet[1766]: E0710 00:42:51.448817 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 
10 00:42:51.550118 kubelet[1766]: E0710 00:42:51.549980 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:51.567043 kubelet[1766]: I0710 00:42:51.566974 1766 policy_none.go:49] "None policy: Start" Jul 10 00:42:51.567927 kubelet[1766]: I0710 00:42:51.567905 1766 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:42:51.568063 kubelet[1766]: I0710 00:42:51.568037 1766 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:42:51.573745 kubelet[1766]: I0710 00:42:51.573696 1766 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:42:51.572000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:51.572000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:51.572000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d34ba0 a1=c0009cd740 a2=c000d34b70 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:51.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:51.574108 kubelet[1766]: I0710 00:42:51.573790 1766 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 00:42:51.574108 kubelet[1766]: I0710 00:42:51.573937 1766 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:42:51.574108 kubelet[1766]: I0710 00:42:51.573953 1766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:42:51.574431 kubelet[1766]: I0710 00:42:51.574406 1766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:42:51.575948 kubelet[1766]: E0710 00:42:51.575926 1766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:42:51.649140 kubelet[1766]: E0710 00:42:51.648940 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Jul 10 00:42:51.677472 kubelet[1766]: I0710 00:42:51.677413 1766 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:51.678434 kubelet[1766]: E0710 00:42:51.678377 1766 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 10 00:42:51.864071 kubelet[1766]: I0710 00:42:51.863936 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:51.864071 kubelet[1766]: I0710 00:42:51.863976 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:51.864071 kubelet[1766]: I0710 00:42:51.863994 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:51.864071 kubelet[1766]: I0710 00:42:51.864008 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:51.864071 kubelet[1766]: I0710 00:42:51.864022 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:42:51.864836 kubelet[1766]: I0710 00:42:51.864033 1766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:51.864836 kubelet[1766]: I0710 00:42:51.864053 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:51.864836 kubelet[1766]: I0710 00:42:51.864067 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:51.864836 kubelet[1766]: I0710 00:42:51.864080 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:51.880247 kubelet[1766]: I0710 00:42:51.880208 1766 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:51.880771 kubelet[1766]: E0710 00:42:51.880731 1766 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 10 00:42:52.079999 kubelet[1766]: E0710 00:42:52.079956 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:52.080993 env[1307]: time="2025-07-10T00:42:52.080951613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:42:52.148561 kubelet[1766]: W0710 00:42:52.148455 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:52.148561 kubelet[1766]: E0710 00:42:52.148516 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:52.164023 kubelet[1766]: E0710 00:42:52.163975 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:52.164099 kubelet[1766]: E0710 00:42:52.164061 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 10 00:42:52.164489 env[1307]: time="2025-07-10T00:42:52.164457118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:42:52.164581 env[1307]: time="2025-07-10T00:42:52.164455775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa6d811a510c118fb18eee19f419e4f7,Namespace:kube-system,Attempt:0,}" Jul 10 00:42:52.281965 kubelet[1766]: I0710 00:42:52.281932 1766 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:52.282242 kubelet[1766]: E0710 00:42:52.282200 1766 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 10 00:42:52.306611 kubelet[1766]: W0710 00:42:52.306539 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:52.306694 kubelet[1766]: E0710 00:42:52.306604 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:52.449838 kubelet[1766]: E0710 00:42:52.449784 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Jul 10 00:42:52.559124 kubelet[1766]: W0710 00:42:52.559052 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:52.559124 kubelet[1766]: E0710 00:42:52.559121 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:52.591534 kubelet[1766]: W0710 00:42:52.591413 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:52.591534 kubelet[1766]: E0710 00:42:52.591522 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:53.083950 kubelet[1766]: I0710 00:42:53.083907 1766 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:53.084375 kubelet[1766]: E0710 00:42:53.084240 1766 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 10 00:42:53.130239 kubelet[1766]: E0710 00:42:53.130187 1766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:53.848997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186616185.mount: Deactivated successfully. Jul 10 00:42:53.857499 env[1307]: time="2025-07-10T00:42:53.857453642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.860250 env[1307]: time="2025-07-10T00:42:53.860183284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.861880 env[1307]: time="2025-07-10T00:42:53.861843194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.862860 env[1307]: time="2025-07-10T00:42:53.862802951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.864598 env[1307]: time="2025-07-10T00:42:53.864567797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.865701 env[1307]: time="2025-07-10T00:42:53.865678672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.865801 kubelet[1766]: W0710 00:42:53.865755 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:53.865862 kubelet[1766]: E0710 00:42:53.865817 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:53.866791 env[1307]: time="2025-07-10T00:42:53.866765489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.868210 env[1307]: time="2025-07-10T00:42:53.868176016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.870109 env[1307]: time="2025-07-10T00:42:53.870077362Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.870742 env[1307]: time="2025-07-10T00:42:53.870704121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.872306 env[1307]: time="2025-07-10T00:42:53.872285335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.874573 env[1307]: time="2025-07-10T00:42:53.874547634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:42:53.910008 env[1307]: time="2025-07-10T00:42:53.909903776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:42:53.910008 env[1307]: time="2025-07-10T00:42:53.910007952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:42:53.910240 env[1307]: time="2025-07-10T00:42:53.910032350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:42:53.910273 env[1307]: time="2025-07-10T00:42:53.910226544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4b4d0f16301652d6e382865b4a9088f24ae55598e5b102a293f41373e8194e1 pid=1808 runtime=io.containerd.runc.v2 Jul 10 00:42:53.921026 env[1307]: time="2025-07-10T00:42:53.920906924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:42:53.921026 env[1307]: time="2025-07-10T00:42:53.921001001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:42:53.921026 env[1307]: time="2025-07-10T00:42:53.921024306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:42:53.922085 env[1307]: time="2025-07-10T00:42:53.922010775Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e2dc3f1dfbbd50ec448255d6aaaaee98cb85041d456b2bc4310cdda96a1659c pid=1825 runtime=io.containerd.runc.v2 Jul 10 00:42:53.953958 env[1307]: time="2025-07-10T00:42:53.953561820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:42:53.953958 env[1307]: time="2025-07-10T00:42:53.953644573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:42:53.953958 env[1307]: time="2025-07-10T00:42:53.953670655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:42:53.953958 env[1307]: time="2025-07-10T00:42:53.953850781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a78c66136c6ac367b30d848066c0c800fde37cacf055137562e14018cee1db28 pid=1841 runtime=io.containerd.runc.v2 Jul 10 00:42:54.051122 kubelet[1766]: E0710 00:42:54.051057 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="3.2s" Jul 10 00:42:54.158461 env[1307]: time="2025-07-10T00:42:54.158019870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"a78c66136c6ac367b30d848066c0c800fde37cacf055137562e14018cee1db28\"" Jul 10 00:42:54.159601 env[1307]: time="2025-07-10T00:42:54.159563712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b4d0f16301652d6e382865b4a9088f24ae55598e5b102a293f41373e8194e1\"" Jul 10 00:42:54.160422 kubelet[1766]: E0710 00:42:54.160397 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:54.160704 kubelet[1766]: E0710 00:42:54.160455 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:54.163113 env[1307]: time="2025-07-10T00:42:54.163079850Z" level=info msg="CreateContainer within sandbox \"e4b4d0f16301652d6e382865b4a9088f24ae55598e5b102a293f41373e8194e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:42:54.163336 env[1307]: time="2025-07-10T00:42:54.163288371Z" level=info msg="CreateContainer within sandbox \"a78c66136c6ac367b30d848066c0c800fde37cacf055137562e14018cee1db28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:42:54.207823 env[1307]: time="2025-07-10T00:42:54.207648073Z" level=info msg="CreateContainer within sandbox \"e4b4d0f16301652d6e382865b4a9088f24ae55598e5b102a293f41373e8194e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f26941d8e83f920388724f7d27eef0bc890c7ffa437db11e01cf62232df7cb21\"" Jul 10 00:42:54.208400 env[1307]: time="2025-07-10T00:42:54.208365938Z" level=info msg="StartContainer for \"f26941d8e83f920388724f7d27eef0bc890c7ffa437db11e01cf62232df7cb21\"" Jul 10 00:42:54.209927 env[1307]: time="2025-07-10T00:42:54.209872617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa6d811a510c118fb18eee19f419e4f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e2dc3f1dfbbd50ec448255d6aaaaee98cb85041d456b2bc4310cdda96a1659c\"" Jul 10 00:42:54.210406 kubelet[1766]: E0710 00:42:54.210377 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:54.211099 kubelet[1766]: W0710 00:42:54.211046 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:54.211166 kubelet[1766]: E0710 00:42:54.211119 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:54.212560 env[1307]: time="2025-07-10T00:42:54.212524421Z" level=info msg="CreateContainer within sandbox \"9e2dc3f1dfbbd50ec448255d6aaaaee98cb85041d456b2bc4310cdda96a1659c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:42:54.213220 env[1307]: time="2025-07-10T00:42:54.213186255Z" level=info msg="CreateContainer within sandbox \"a78c66136c6ac367b30d848066c0c800fde37cacf055137562e14018cee1db28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7307771b1db4b78f158627aba59138cd9dfa05cc37dbfd8b76f46a5a963a914c\"" Jul 10 00:42:54.213913 env[1307]: time="2025-07-10T00:42:54.213878119Z" level=info msg="StartContainer for \"7307771b1db4b78f158627aba59138cd9dfa05cc37dbfd8b76f46a5a963a914c\"" Jul 10 00:42:54.244513 env[1307]: time="2025-07-10T00:42:54.244424595Z" level=info msg="CreateContainer within sandbox \"9e2dc3f1dfbbd50ec448255d6aaaaee98cb85041d456b2bc4310cdda96a1659c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e85d9b182de4b882638aeba175a3c4985cfffba9edc679138c051d32f2effe0\"" Jul 10 00:42:54.245301 env[1307]: time="2025-07-10T00:42:54.245240563Z" level=info msg="StartContainer for \"3e85d9b182de4b882638aeba175a3c4985cfffba9edc679138c051d32f2effe0\"" Jul 10 00:42:54.348490 env[1307]: time="2025-07-10T00:42:54.348416954Z" level=info msg="StartContainer for \"7307771b1db4b78f158627aba59138cd9dfa05cc37dbfd8b76f46a5a963a914c\" returns successfully" Jul 10 00:42:54.350426 env[1307]: time="2025-07-10T00:42:54.350390372Z" level=info msg="StartContainer for \"f26941d8e83f920388724f7d27eef0bc890c7ffa437db11e01cf62232df7cb21\" returns successfully" Jul 10 00:42:54.414159 kubelet[1766]: W0710 00:42:54.413988 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 10 00:42:54.414159 kubelet[1766]: E0710 00:42:54.414050 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:42:54.449699 env[1307]: time="2025-07-10T00:42:54.449627902Z" level=info msg="StartContainer for \"3e85d9b182de4b882638aeba175a3c4985cfffba9edc679138c051d32f2effe0\" returns successfully" Jul 10 00:42:54.698840 kubelet[1766]: I0710 00:42:54.685749 1766 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:55.080149 kubelet[1766]: E0710 00:42:55.079956 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:55.082098 
kubelet[1766]: E0710 00:42:55.082055 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:55.084467 kubelet[1766]: E0710 00:42:55.084439 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:56.095681 kubelet[1766]: E0710 00:42:56.094055 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:56.095681 kubelet[1766]: E0710 00:42:56.094518 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:56.095681 kubelet[1766]: E0710 00:42:56.094608 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:56.212684 kubelet[1766]: I0710 00:42:56.212612 1766 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:42:56.212684 kubelet[1766]: E0710 00:42:56.212688 1766 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:42:56.286034 kubelet[1766]: E0710 00:42:56.285963 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.387298 kubelet[1766]: E0710 00:42:56.387142 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.487638 kubelet[1766]: E0710 00:42:56.487583 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.615079 kubelet[1766]: E0710 00:42:56.615012 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.715768 kubelet[1766]: E0710 00:42:56.715708 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.816432 kubelet[1766]: E0710 00:42:56.816338 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:56.917208 kubelet[1766]: E0710 00:42:56.917136 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.018194 kubelet[1766]: E0710 00:42:57.018058 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.087671 kubelet[1766]: E0710 00:42:57.087622 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:57.119063 kubelet[1766]: E0710 00:42:57.119001 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.219807 kubelet[1766]: E0710 00:42:57.219747 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.320236 kubelet[1766]: E0710 00:42:57.320078 1766 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.420699 kubelet[1766]: E0710 00:42:57.420638 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.521349 kubelet[1766]: E0710 00:42:57.521308 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.622492 kubelet[1766]: E0710 00:42:57.622364 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.723467 kubelet[1766]: E0710 00:42:57.723445 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.824067 kubelet[1766]: E0710 00:42:57.823994 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:57.924534 kubelet[1766]: E0710 00:42:57.924494 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.025122 kubelet[1766]: E0710 00:42:58.025091 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.126039 kubelet[1766]: E0710 00:42:58.126005 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.226699 kubelet[1766]: E0710 00:42:58.226581 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.327324 kubelet[1766]: E0710 00:42:58.327249 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.427849 kubelet[1766]: E0710 00:42:58.427810 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.528589 kubelet[1766]: E0710 00:42:58.528511 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.629609 kubelet[1766]: E0710 00:42:58.629561 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.730266 kubelet[1766]: E0710 00:42:58.730187 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.830945 kubelet[1766]: E0710 00:42:58.830844 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.840021 systemd[1]: Reloading. 
Jul 10 00:42:58.909882 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-07-10T00:42:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:42:58.910333 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-07-10T00:42:58Z" level=info msg="torcx already run" Jul 10 00:42:58.931977 kubelet[1766]: E0710 00:42:58.931928 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:58.980514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:42:58.980532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:42:58.998956 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:42:59.032670 kubelet[1766]: E0710 00:42:59.032610 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:59.082997 systemd[1]: Stopping kubelet.service... Jul 10 00:42:59.107181 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:42:59.107586 systemd[1]: Stopped kubelet.service. Jul 10 00:42:59.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:59.108707 kernel: kauditd_printk_skb: 45 callbacks suppressed Jul 10 00:42:59.108766 kernel: audit: type=1131 audit(1752108179.106:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:59.109844 systemd[1]: Starting kubelet.service... Jul 10 00:42:59.206402 systemd[1]: Started kubelet.service. Jul 10 00:42:59.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:59.210702 kernel: audit: type=1130 audit(1752108179.206:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:42:59.272847 kubelet[2121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:42:59.272847 kubelet[2121]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:42:59.272847 kubelet[2121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:42:59.273325 kubelet[2121]: I0710 00:42:59.272875 2121 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:42:59.279175 kubelet[2121]: I0710 00:42:59.279119 2121 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:42:59.279175 kubelet[2121]: I0710 00:42:59.279151 2121 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:42:59.279412 kubelet[2121]: I0710 00:42:59.279390 2121 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:42:59.280727 kubelet[2121]: I0710 00:42:59.280708 2121 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:42:59.282664 kubelet[2121]: I0710 00:42:59.282520 2121 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:42:59.291102 kubelet[2121]: E0710 00:42:59.291061 2121 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:42:59.291102 kubelet[2121]: I0710 00:42:59.291095 2121 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:42:59.295355 kubelet[2121]: I0710 00:42:59.295318 2121 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:42:59.295805 kubelet[2121]: I0710 00:42:59.295791 2121 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:42:59.295945 kubelet[2121]: I0710 00:42:59.295915 2121 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:42:59.296114 kubelet[2121]: I0710 00:42:59.295948 2121 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:42:59.296197 kubelet[2121]: I0710 00:42:59.296125 2121 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:42:59.296197 kubelet[2121]: I0710 00:42:59.296132 2121 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:42:59.296197 kubelet[2121]: I0710 00:42:59.296155 2121 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:42:59.296262 kubelet[2121]: I0710 00:42:59.296253 2121 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:42:59.296284 kubelet[2121]: I0710 00:42:59.296264 2121 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:42:59.296327 kubelet[2121]: I0710 00:42:59.296318 2121 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:42:59.296380 kubelet[2121]: I0710 00:42:59.296347 2121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:42:59.297040 kubelet[2121]: I0710 00:42:59.297017 2121 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:42:59.297391 kubelet[2121]: I0710 00:42:59.297346 2121 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:42:59.297761 kubelet[2121]: I0710 00:42:59.297743 2121 server.go:1274] "Started kubelet" Jul 10 00:42:59.298000 audit[2121]: AVC avc: denied { mac_admin } for pid=2121 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:59.305925 kubelet[2121]: I0710 00:42:59.299616 2121 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 00:42:59.305925 kubelet[2121]: I0710 00:42:59.299648 2121 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on 
plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 00:42:59.305925 kubelet[2121]: I0710 00:42:59.299692 2121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:42:59.307936 kubelet[2121]: I0710 00:42:59.307865 2121 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:42:59.308645 kubelet[2121]: I0710 00:42:59.308583 2121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:42:59.308906 kubelet[2121]: I0710 00:42:59.308880 2121 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:42:59.309117 kubelet[2121]: I0710 00:42:59.309092 2121 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:42:59.309739 kubelet[2121]: I0710 00:42:59.309665 2121 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:42:59.309795 kubelet[2121]: I0710 00:42:59.309757 2121 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:42:59.309866 kubelet[2121]: I0710 00:42:59.309834 2121 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:42:59.310175 kubelet[2121]: E0710 00:42:59.310151 2121 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:42:59.310681 kubelet[2121]: E0710 00:42:59.310551 2121 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:42:59.311253 kubelet[2121]: I0710 00:42:59.311233 2121 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:42:59.311469 kubelet[2121]: I0710 00:42:59.311434 2121 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:42:59.311744 kubelet[2121]: I0710 00:42:59.311719 2121 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:42:59.313262 kubelet[2121]: I0710 00:42:59.313249 2121 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:42:59.318230 kubelet[2121]: I0710 00:42:59.318189 2121 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:42:59.319138 kubelet[2121]: I0710 00:42:59.319117 2121 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:42:59.319138 kubelet[2121]: I0710 00:42:59.319136 2121 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:42:59.319216 kubelet[2121]: I0710 00:42:59.319159 2121 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:42:59.319216 kubelet[2121]: E0710 00:42:59.319199 2121 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:42:59.328961 kernel: audit: type=1400 audit(1752108179.298:215): avc: denied { mac_admin } for pid=2121 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:59.329072 kernel: audit: type=1401 audit(1752108179.298:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:59.298000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:59.298000 audit[2121]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005ccfc0 a1=c00025fd10 a2=c0005ccf90 a3=25 items=0 ppid=1 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:59.333957 kernel: audit: type=1300 audit(1752108179.298:215): arch=c000003e syscall=188 success=no exit=-22 a0=c0005ccfc0 a1=c00025fd10 a2=c0005ccf90 a3=25 items=0 ppid=1 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:59.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:59.339837 kernel: audit: type=1327 audit(1752108179.298:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:59.339907 kernel: audit: type=1400 audit(1752108179.298:216): avc: denied { mac_admin } for pid=2121 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:59.298000 audit[2121]: AVC avc: denied { mac_admin } for pid=2121 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:59.345880 kernel: audit: type=1401 audit(1752108179.298:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:59.298000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:59.298000 audit[2121]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005ac8e0 a1=c00025fd28 a2=c0005cd050 a3=25 items=0 ppid=1 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:59.350969 kernel: audit: type=1300 audit(1752108179.298:216): arch=c000003e syscall=188 success=no 
exit=-22 a0=c0005ac8e0 a1=c00025fd28 a2=c0005cd050 a3=25 items=0 ppid=1 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:59.351003 kernel: audit: type=1327 audit(1752108179.298:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:59.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:59.375499 kubelet[2121]: I0710 00:42:59.375464 2121 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:42:59.375978 kubelet[2121]: I0710 00:42:59.375932 2121 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:42:59.376144 kubelet[2121]: I0710 00:42:59.376102 2121 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:42:59.376500 kubelet[2121]: I0710 00:42:59.376482 2121 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:42:59.376624 kubelet[2121]: I0710 00:42:59.376588 2121 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:42:59.376729 kubelet[2121]: I0710 00:42:59.376713 2121 policy_none.go:49] "None policy: Start" Jul 10 00:42:59.377979 kubelet[2121]: I0710 00:42:59.377958 2121 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:42:59.378078 kubelet[2121]: I0710 00:42:59.378062 2121 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:42:59.378304 kubelet[2121]: I0710 00:42:59.378291 2121 state_mem.go:75] "Updated machine memory state" Jul 10 00:42:59.379746 kubelet[2121]: I0710 00:42:59.379625 2121 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:42:59.379000 audit[2121]: AVC avc: denied { mac_admin } for pid=2121 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:42:59.379000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:42:59.379000 audit[2121]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00132d950 a1=c000c83938 a2=c00132d920 a3=25 items=0 ppid=1 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:42:59.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:42:59.380759 kubelet[2121]: I0710 00:42:59.380353 2121 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 00:42:59.380759 kubelet[2121]: I0710 00:42:59.380557 2121 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:42:59.380759 kubelet[2121]: I0710 00:42:59.380569 2121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:42:59.380881 kubelet[2121]: I0710 00:42:59.380836 2121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:42:59.485110 kubelet[2121]: I0710 00:42:59.485061 2121 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:42:59.611541 kubelet[2121]: I0710 00:42:59.611365 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:59.611541 kubelet[2121]: I0710 00:42:59.611441 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:59.611541 kubelet[2121]: I0710 00:42:59.611473 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:59.611541 kubelet[2121]: I0710 00:42:59.611495 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:59.611541 kubelet[2121]: I0710 00:42:59.611516 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:59.611866 kubelet[2121]: I0710 00:42:59.611534 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:42:59.611866 kubelet[2121]: I0710 00:42:59.611550 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa6d811a510c118fb18eee19f419e4f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa6d811a510c118fb18eee19f419e4f7\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:42:59.611866 kubelet[2121]: I0710 
00:42:59.611573 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:59.611866 kubelet[2121]: I0710 00:42:59.611590 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:42:59.701285 kubelet[2121]: I0710 00:42:59.701238 2121 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:42:59.701484 kubelet[2121]: I0710 00:42:59.701330 2121 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:42:59.731160 kubelet[2121]: E0710 00:42:59.731066 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:59.732122 kubelet[2121]: E0710 00:42:59.732100 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:59.732197 kubelet[2121]: E0710 00:42:59.732170 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:00.297679 kubelet[2121]: I0710 00:43:00.297628 2121 apiserver.go:52] "Watching apiserver" Jul 10 00:43:00.309874 kubelet[2121]: I0710 00:43:00.309841 2121 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:43:00.332800 kubelet[2121]: E0710 00:43:00.332234 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:00.332800 kubelet[2121]: E0710 00:43:00.332703 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:00.336469 kubelet[2121]: E0710 00:43:00.336435 2121 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:43:00.336777 kubelet[2121]: E0710 00:43:00.336759 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:00.364590 kubelet[2121]: I0710 00:43:00.364331 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.364305711 podStartE2EDuration="1.364305711s" podCreationTimestamp="2025-07-10 00:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:00.355898533 +0000 UTC m=+1.146278387" watchObservedRunningTime="2025-07-10 00:43:00.364305711 +0000 UTC m=+1.154685565" Jul 10 00:43:00.373100 kubelet[2121]: I0710 
00:43:00.373020 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.372868683 podStartE2EDuration="1.372868683s" podCreationTimestamp="2025-07-10 00:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:00.365014589 +0000 UTC m=+1.155394463" watchObservedRunningTime="2025-07-10 00:43:00.372868683 +0000 UTC m=+1.163248547" Jul 10 00:43:00.382290 kubelet[2121]: I0710 00:43:00.382106 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.382083451 podStartE2EDuration="1.382083451s" podCreationTimestamp="2025-07-10 00:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:00.373468189 +0000 UTC m=+1.163848053" watchObservedRunningTime="2025-07-10 00:43:00.382083451 +0000 UTC m=+1.172463315" Jul 10 00:43:01.036701 update_engine[1293]: I0710 00:43:01.036619 1293 update_attempter.cc:509] Updating boot flags... Jul 10 00:43:01.332790 kubelet[2121]: E0710 00:43:01.332670 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:03.280067 kubelet[2121]: E0710 00:43:03.280025 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:03.453191 kubelet[2121]: I0710 00:43:03.453155 2121 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:43:03.453518 env[1307]: time="2025-07-10T00:43:03.453480252Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:43:03.453895 kubelet[2121]: I0710 00:43:03.453676 2121 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:43:04.561457 kubelet[2121]: I0710 00:43:04.561380 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8f26d0d-1cbb-4803-8712-509c6ae07d23-kube-proxy\") pod \"kube-proxy-tg5lz\" (UID: \"b8f26d0d-1cbb-4803-8712-509c6ae07d23\") " pod="kube-system/kube-proxy-tg5lz" Jul 10 00:43:04.561457 kubelet[2121]: I0710 00:43:04.561433 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb80f4bf-42a1-4d33-8c4d-aaa173337f03-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-hc6zh\" (UID: \"bb80f4bf-42a1-4d33-8c4d-aaa173337f03\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hc6zh" Jul 10 00:43:04.561457 kubelet[2121]: I0710 00:43:04.561457 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvsrf\" (UniqueName: \"kubernetes.io/projected/bb80f4bf-42a1-4d33-8c4d-aaa173337f03-kube-api-access-dvsrf\") pod \"tigera-operator-5bf8dfcb4-hc6zh\" (UID: \"bb80f4bf-42a1-4d33-8c4d-aaa173337f03\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hc6zh" Jul 10 00:43:04.561457 kubelet[2121]: I0710 00:43:04.561474 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8f26d0d-1cbb-4803-8712-509c6ae07d23-xtables-lock\") pod \"kube-proxy-tg5lz\" (UID: \"b8f26d0d-1cbb-4803-8712-509c6ae07d23\") " pod="kube-system/kube-proxy-tg5lz" Jul 10 00:43:04.562020 kubelet[2121]: I0710 00:43:04.561490 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8f26d0d-1cbb-4803-8712-509c6ae07d23-lib-modules\") pod \"kube-proxy-tg5lz\" (UID: \"b8f26d0d-1cbb-4803-8712-509c6ae07d23\") " pod="kube-system/kube-proxy-tg5lz" Jul 10 00:43:04.562020 kubelet[2121]: I0710 00:43:04.561559 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8blf\" (UniqueName: \"kubernetes.io/projected/b8f26d0d-1cbb-4803-8712-509c6ae07d23-kube-api-access-b8blf\") pod \"kube-proxy-tg5lz\" (UID: \"b8f26d0d-1cbb-4803-8712-509c6ae07d23\") " pod="kube-system/kube-proxy-tg5lz" Jul 10 00:43:04.667810 kubelet[2121]: I0710 00:43:04.667768 2121 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:43:04.694734 kubelet[2121]: E0710 00:43:04.694695 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:04.695235 env[1307]: time="2025-07-10T00:43:04.695196536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tg5lz,Uid:b8f26d0d-1cbb-4803-8712-509c6ae07d23,Namespace:kube-system,Attempt:0,}" Jul 10 00:43:04.717962 env[1307]: time="2025-07-10T00:43:04.717858322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:04.717962 env[1307]: time="2025-07-10T00:43:04.717898570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:04.717962 env[1307]: time="2025-07-10T00:43:04.717908419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:04.718205 env[1307]: time="2025-07-10T00:43:04.718066064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e738f23384b02aae51b4ad802506cc9a1da253c598a94808b36895f10171311 pid=2194 runtime=io.containerd.runc.v2 Jul 10 00:43:04.800379 env[1307]: time="2025-07-10T00:43:04.800314704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hc6zh,Uid:bb80f4bf-42a1-4d33-8c4d-aaa173337f03,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:43:04.831301 env[1307]: time="2025-07-10T00:43:04.831113293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:04.831301 env[1307]: time="2025-07-10T00:43:04.831156306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:04.831301 env[1307]: time="2025-07-10T00:43:04.831194639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:04.831675 env[1307]: time="2025-07-10T00:43:04.831600333Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e04c25413183869826d069304c38f8a330c465ab7760a28b50e55e51ed135525 pid=2223 runtime=io.containerd.runc.v2 Jul 10 00:43:04.938531 env[1307]: time="2025-07-10T00:43:04.938467040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tg5lz,Uid:b8f26d0d-1cbb-4803-8712-509c6ae07d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e738f23384b02aae51b4ad802506cc9a1da253c598a94808b36895f10171311\"" Jul 10 00:43:04.939312 kubelet[2121]: E0710 00:43:04.939281 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:04.941112 env[1307]: time="2025-07-10T00:43:04.941062024Z" level=info msg="CreateContainer within sandbox \"0e738f23384b02aae51b4ad802506cc9a1da253c598a94808b36895f10171311\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:43:05.002238 env[1307]: time="2025-07-10T00:43:05.002187258Z" level=info msg="CreateContainer within sandbox \"0e738f23384b02aae51b4ad802506cc9a1da253c598a94808b36895f10171311\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a8ba60259a60b9dfd9f4fb94e3c57bf3bd1c7350cc5150b308c6551c7879d81\"" Jul 10 00:43:05.002830 env[1307]: time="2025-07-10T00:43:05.002785242Z" level=info msg="StartContainer for \"9a8ba60259a60b9dfd9f4fb94e3c57bf3bd1c7350cc5150b308c6551c7879d81\"" Jul 10 00:43:05.047306 env[1307]: time="2025-07-10T00:43:05.047255650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hc6zh,Uid:bb80f4bf-42a1-4d33-8c4d-aaa173337f03,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e04c25413183869826d069304c38f8a330c465ab7760a28b50e55e51ed135525\"" Jul 10 00:43:05.050522 
env[1307]: time="2025-07-10T00:43:05.050488177Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:43:05.110608 env[1307]: time="2025-07-10T00:43:05.110189174Z" level=info msg="StartContainer for \"9a8ba60259a60b9dfd9f4fb94e3c57bf3bd1c7350cc5150b308c6551c7879d81\" returns successfully" Jul 10 00:43:05.232702 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 10 00:43:05.232831 kernel: audit: type=1325 audit(1752108185.227:218): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.232875 kernel: audit: type=1300 audit(1752108185.227:218): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8a2fdac0 a2=0 a3=7ffe8a2fdaac items=0 ppid=2287 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.227000 audit[2338]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.227000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8a2fdac0 a2=0 a3=7ffe8a2fdaac items=0 ppid=2287 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:43:05.239701 kernel: audit: type=1327 audit(1752108185.227:218): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:43:05.227000 audit[2337]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.241866 kernel: audit: type=1325 audit(1752108185.227:219): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.241909 kernel: audit: type=1300 audit(1752108185.227:219): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde939bdd0 a2=0 a3=7ffde939bdbc items=0 ppid=2287 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.227000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde939bdd0 a2=0 a3=7ffde939bdbc items=0 ppid=2287 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.246446 kernel: audit: type=1327 audit(1752108185.227:219): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:43:05.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:43:05.248688 kernel: audit: type=1325 audit(1752108185.229:220): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.229000 audit[2339]: NETFILTER_CFG table=nat:40 family=2 
entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.229000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff43b92e0 a2=0 a3=7ffff43b92cc items=0 ppid=2287 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.255539 kernel: audit: type=1300 audit(1752108185.229:220): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff43b92e0 a2=0 a3=7ffff43b92cc items=0 ppid=2287 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.255603 kernel: audit: type=1327 audit(1752108185.229:220): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:43:05.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:43:05.232000 audit[2340]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.259966 kernel: audit: type=1325 audit(1752108185.232:221): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.232000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed59c5f90 a2=0 a3=7ffed59c5f7c items=0 ppid=2287 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 00:43:05.232000 audit[2341]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.232000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea7eb2ee0 a2=0 a3=7ffea7eb2ecc items=0 ppid=2287 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:43:05.233000 audit[2342]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.233000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce55302f0 a2=0 a3=7ffce55302dc items=0 ppid=2287 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 00:43:05.330000 audit[2343]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jul 10 00:43:05.330000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff26346a50 a2=0 a3=7fff26346a3c items=0 ppid=2287 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 00:43:05.333000 audit[2345]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.333000 audit[2345]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde923cc50 a2=0 a3=7ffde923cc3c items=0 ppid=2287 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 10 00:43:05.337000 audit[2348]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.337000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffca6d20750 a2=0 a3=7ffca6d2073c items=0 ppid=2287 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 10 00:43:05.338000 audit[2349]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.338000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe09faa6d0 a2=0 a3=7ffe09faa6bc items=0 ppid=2287 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 00:43:05.341797 kubelet[2121]: E0710 00:43:05.341770 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:05.341000 audit[2351]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.341000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0871d160 a2=0 a3=7ffd0871d14c items=0 ppid=2287 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 00:43:05.342000 audit[2352]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.342000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff96f23840 a2=0 a3=7fff96f2382c items=0 ppid=2287 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 00:43:05.345000 audit[2354]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.345000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc837e2000 a2=0 a3=7ffc837e1fec items=0 ppid=2287 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 00:43:05.349000 audit[2357]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.349000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb3a7dbc0 a2=0 a3=7fffb3a7dbac items=0 ppid=2287 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 10 00:43:05.351000 audit[2358]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.351000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e707870 a2=0 a3=7fff2e70785c items=0 ppid=2287 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 00:43:05.354000 audit[2360]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.354000 
audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8a57c160 a2=0 a3=7ffe8a57c14c items=0 ppid=2287 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 00:43:05.355000 audit[2361]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.355000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2c5c6b90 a2=0 a3=7fff2c5c6b7c items=0 ppid=2287 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 00:43:05.357000 audit[2363]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.357000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf23594a0 a2=0 a3=7ffdf235948c items=0 ppid=2287 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:43:05.361000 audit[2366]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.361000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8c76a4b0 a2=0 a3=7ffc8c76a49c items=0 ppid=2287 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:43:05.364000 audit[2369]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.364000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff71c15fb0 a2=0 a3=7fff71c15f9c items=0 ppid=2287 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.364000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 00:43:05.365000 audit[2370]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.365000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9eb9c480 a2=0 a3=7ffd9eb9c46c items=0 ppid=2287 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 00:43:05.367000 audit[2372]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.367000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd7a87aab0 a2=0 a3=7ffd7a87aa9c items=0 ppid=2287 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:43:05.370000 audit[2375]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.370000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf3039700 a2=0 a3=7ffdf30396ec items=0 ppid=2287 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:43:05.371000 audit[2376]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.371000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf6b19f50 a2=0 a3=7ffcf6b19f3c items=0 ppid=2287 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 00:43:05.373000 audit[2378]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:43:05.373000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcfdf089b0 a2=0 a3=7ffcfdf0899c items=0 ppid=2287 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 00:43:05.397000 audit[2384]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:05.397000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff891c9b10 a2=0 a3=7fff891c9afc items=0 ppid=2287 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:05.407000 audit[2384]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:05.407000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff891c9b10 a2=0 a3=7fff891c9afc items=0 ppid=2287 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:05.408000 audit[2389]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.408000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe1e8982a0 a2=0 a3=7ffe1e89828c items=0 ppid=2287 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 00:43:05.411000 audit[2391]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.411000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4add0a00 a2=0 a3=7fff4add09ec items=0 ppid=2287 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 10 00:43:05.415000 audit[2394]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.415000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffe88f91a40 a2=0 a3=7ffe88f91a2c items=0 ppid=2287 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 10 00:43:05.418072 kubelet[2121]: E0710 00:43:05.418035 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:05.416000 audit[2395]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.416000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6cb437d0 a2=0 a3=7fff6cb437bc items=0 ppid=2287 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 00:43:05.420000 audit[2397]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.420000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0fea4470 a2=0 a3=7ffd0fea445c items=0 ppid=2287 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 00:43:05.421000 audit[2398]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.421000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5fb0a4e0 a2=0 a3=7fff5fb0a4cc items=0 ppid=2287 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 00:43:05.423000 audit[2400]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.423000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff75a51790 a2=0 a3=7fff75a5177c items=0 ppid=2287 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.423000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 10 00:43:05.427000 audit[2403]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.427000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffec5ee0fc0 a2=0 a3=7ffec5ee0fac items=0 ppid=2287 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 00:43:05.427000 audit[2404]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.427000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde6231370 a2=0 a3=7ffde623135c items=0 ppid=2287 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 00:43:05.433000 audit[2406]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.433000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9678c6e0 a2=0 a3=7ffe9678c6cc items=0 ppid=2287 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 00:43:05.434000 audit[2407]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.434000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe36a14ea0 a2=0 a3=7ffe36a14e8c items=0 ppid=2287 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 00:43:05.436000 audit[2409]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.436000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed19cbc30 a2=0 a3=7ffed19cbc1c 
items=0 ppid=2287 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:43:05.440000 audit[2412]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.440000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffbfccebc0 a2=0 a3=7fffbfccebac items=0 ppid=2287 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 00:43:05.443000 audit[2415]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.443000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb77882c0 a2=0 a3=7ffdb77882ac items=0 ppid=2287 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 10 00:43:05.444000 audit[2416]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.444000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc164ad030 a2=0 a3=7ffc164ad01c items=0 ppid=2287 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 00:43:05.446000 audit[2418]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.446000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd21ef3840 a2=0 a3=7ffd21ef382c items=0 ppid=2287 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.446000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:43:05.448000 audit[2421]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.448000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff9fd8da40 a2=0 a3=7fff9fd8da2c items=0 ppid=2287 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:43:05.449000 audit[2422]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.449000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3c940130 a2=0 a3=7ffe3c94011c items=0 ppid=2287 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 00:43:05.451000 audit[2424]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.451000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe7f06b310 a2=0 a3=7ffe7f06b2fc items=0 ppid=2287 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 00:43:05.452000 audit[2425]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.452000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc380a96e0 a2=0 a3=7ffc380a96cc items=0 ppid=2287 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 00:43:05.454000 audit[2427]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.454000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5a54cef0 a2=0 a3=7ffc5a54cedc items=0 ppid=2287 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:43:05.458000 audit[2430]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:43:05.458000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc3c9e2820 a2=0 a3=7ffc3c9e280c items=0 ppid=2287 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:43:05.460000 audit[2432]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 00:43:05.460000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc93413620 a2=0 a3=7ffc9341360c items=0 ppid=2287 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.460000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:05.461000 audit[2432]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 00:43:05.461000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc93413620 a2=0 a3=7ffc9341360c items=0 ppid=2287 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:05.461000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:05.497670 kubelet[2121]: I0710 00:43:05.497575 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tg5lz" podStartSLOduration=1.497557203 podStartE2EDuration="1.497557203s" podCreationTimestamp="2025-07-10 00:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:05.352749298 +0000 UTC m=+6.143129162" watchObservedRunningTime="2025-07-10 00:43:05.497557203 +0000 UTC m=+6.287937067" Jul 10 00:43:06.346354 kubelet[2121]: E0710 00:43:06.346319 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:06.610433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723508026.mount: Deactivated successfully. 
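The long run of NETFILTER_CFG / SYSCALL / PROCTITLE records above is the KUBE-* chain setup: iptables, ip6tables and iptables-restore calls all spawned from ppid 2287, evidently kube-proxy given the pod-startup record that follows. The audit PROCTITLE field is the invoking command line, hex-encoded with NUL (0x00) bytes separating the arguments, so it can be decoded offline. A minimal sketch in Python, assuming only the standard library; the helper name decode_proctitle is ours, and the sample value is copied verbatim from one of the records above:

# Decode an audit PROCTITLE value: the raw argv of the process,
# hex-encoded, with NUL bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

# Sample copied from one of the NETFILTER_CFG records above:
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
))
# -> iptables -w 5 -W 100000 -N KUBE-SERVICES -t nat

The same decoding applied to the iptables-restore entries yields "iptables-restore -w 5 -W 100000 --noflush --counters", which matches their comm="iptables-restor" field once the kernel's 16-byte comm limit is taken into account.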
Jul 10 00:43:07.415850 env[1307]: time="2025-07-10T00:43:07.415770214Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:07.417724 env[1307]: time="2025-07-10T00:43:07.417691783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:07.419522 env[1307]: time="2025-07-10T00:43:07.419466829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:07.420926 env[1307]: time="2025-07-10T00:43:07.420892744Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:07.421869 env[1307]: time="2025-07-10T00:43:07.421833825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 00:43:07.425011 env[1307]: time="2025-07-10T00:43:07.424950584Z" level=info msg="CreateContainer within sandbox \"e04c25413183869826d069304c38f8a330c465ab7760a28b50e55e51ed135525\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:43:07.437818 env[1307]: time="2025-07-10T00:43:07.437780098Z" level=info msg="CreateContainer within sandbox \"e04c25413183869826d069304c38f8a330c465ab7760a28b50e55e51ed135525\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d419ced9fdc2b6c7b1542f95d7dc1ea5d01fcbb04280053ad3f6263d93a624c\"" Jul 10 00:43:07.438210 env[1307]: time="2025-07-10T00:43:07.438179467Z" level=info msg="StartContainer for \"8d419ced9fdc2b6c7b1542f95d7dc1ea5d01fcbb04280053ad3f6263d93a624c\"" Jul 10 00:43:08.010283 kubelet[2121]: E0710 00:43:08.010241 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:08.328849 env[1307]: time="2025-07-10T00:43:08.328686950Z" level=info msg="StartContainer for \"8d419ced9fdc2b6c7b1542f95d7dc1ea5d01fcbb04280053ad3f6263d93a624c\" returns successfully" Jul 10 00:43:08.350803 kubelet[2121]: E0710 00:43:08.350718 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:13.288090 kubelet[2121]: E0710 00:43:13.288037 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:13.311217 sudo[1465]: pam_unix(sudo:session): session closed for user root Jul 10 00:43:13.310000 audit[1465]: USER_END pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:13.312083 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 10 00:43:13.312139 kernel: audit: type=1106 audit(1752108193.310:269): pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:43:13.315000 audit[1465]: CRED_DISP pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:43:13.319672 kernel: audit: type=1104 audit(1752108193.315:270): pid=1465 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:43:13.337672 sshd[1460]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:13.338000 audit[1460]: USER_END pid=1460 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:13.340504 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:53526.service: Deactivated successfully. Jul 10 00:43:13.341684 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:43:13.342179 systemd-logind[1287]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:43:13.342966 systemd-logind[1287]: Removed session 7. Jul 10 00:43:13.345680 kernel: audit: type=1106 audit(1752108193.338:271): pid=1460 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:13.338000 audit[1460]: CRED_DISP pid=1460 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:13.350327 kubelet[2121]: I0710 00:43:13.350285 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-hc6zh" podStartSLOduration=6.976221045 podStartE2EDuration="9.350265777s" podCreationTimestamp="2025-07-10 00:43:04 +0000 UTC" firstStartedPulling="2025-07-10 00:43:05.048696981 +0000 UTC m=+5.839076835" lastFinishedPulling="2025-07-10 00:43:07.422741703 +0000 UTC m=+8.213121567" observedRunningTime="2025-07-10 00:43:08.39596493 +0000 UTC m=+9.186344784" watchObservedRunningTime="2025-07-10 00:43:13.350265777 +0000 UTC m=+14.140645641" Jul 10 00:43:13.350671 kernel: audit: type=1104 audit(1752108193.338:272): pid=1460 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:13.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.99:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:13.356669 kernel: audit: type=1131 audit(1752108193.340:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.99:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:13.863000 audit[2523]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.872163 kernel: audit: type=1325 audit(1752108193.863:274): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.872218 kernel: audit: type=1300 audit(1752108193.863:274): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffcc3f21b0 a2=0 a3=7fffcc3f219c items=0 ppid=2287 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:13.863000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffcc3f21b0 a2=0 a3=7fffcc3f219c items=0 ppid=2287 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:13.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:13.876675 kernel: audit: type=1327 audit(1752108193.863:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:13.876000 audit[2523]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.880678 kernel: audit: type=1325 audit(1752108193.876:275): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.876000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcc3f21b0 a2=0 a3=0 items=0 ppid=2287 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:13.886680 kernel: audit: type=1300 audit(1752108193.876:275): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcc3f21b0 a2=0 a3=0 items=0 ppid=2287 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:13.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:13.891000 audit[2525]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.891000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe52adf960 a2=0 a3=7ffe52adf94c items=0 ppid=2287 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 10 00:43:13.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:13.897000 audit[2525]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:13.897000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe52adf960 a2=0 a3=0 items=0 ppid=2287 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:13.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:15.381000 audit[2527]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:15.381000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc876bc7b0 a2=0 a3=7ffc876bc79c items=0 ppid=2287 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:15.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:15.387000 audit[2527]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:15.387000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc876bc7b0 a2=0 a3=0 items=0 ppid=2287 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:15.387000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:15.763554 kubelet[2121]: I0710 00:43:15.763481 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/99c2ea65-34f8-4842-8ad2-febe4b06acd4-typha-certs\") pod \"calico-typha-5c9d5b547-582gb\" (UID: \"99c2ea65-34f8-4842-8ad2-febe4b06acd4\") " pod="calico-system/calico-typha-5c9d5b547-582gb" Jul 10 00:43:15.763554 kubelet[2121]: I0710 00:43:15.763533 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99c2ea65-34f8-4842-8ad2-febe4b06acd4-tigera-ca-bundle\") pod \"calico-typha-5c9d5b547-582gb\" (UID: \"99c2ea65-34f8-4842-8ad2-febe4b06acd4\") " pod="calico-system/calico-typha-5c9d5b547-582gb" Jul 10 00:43:15.763554 kubelet[2121]: I0710 00:43:15.763552 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tsw2\" (UniqueName: \"kubernetes.io/projected/99c2ea65-34f8-4842-8ad2-febe4b06acd4-kube-api-access-4tsw2\") pod \"calico-typha-5c9d5b547-582gb\" (UID: \"99c2ea65-34f8-4842-8ad2-febe4b06acd4\") " pod="calico-system/calico-typha-5c9d5b547-582gb" Jul 10 00:43:15.925589 kubelet[2121]: E0710 00:43:15.925544 2121 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:15.927103 env[1307]: time="2025-07-10T00:43:15.927065213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9d5b547-582gb,Uid:99c2ea65-34f8-4842-8ad2-febe4b06acd4,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:15.948551 env[1307]: time="2025-07-10T00:43:15.948454464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:15.948551 env[1307]: time="2025-07-10T00:43:15.948506603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:15.948551 env[1307]: time="2025-07-10T00:43:15.948517284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:15.948835 env[1307]: time="2025-07-10T00:43:15.948765068Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3df9f656a15a0909d43f0d437a100a35779a7c1951bcc86f9ef56f4ee6afe90f pid=2537 runtime=io.containerd.runc.v2 Jul 10 00:43:16.013198 env[1307]: time="2025-07-10T00:43:16.013142885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9d5b547-582gb,Uid:99c2ea65-34f8-4842-8ad2-febe4b06acd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"3df9f656a15a0909d43f0d437a100a35779a7c1951bcc86f9ef56f4ee6afe90f\"" Jul 10 00:43:16.015118 kubelet[2121]: E0710 00:43:16.014790 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:16.018757 env[1307]: time="2025-07-10T00:43:16.018729453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:43:16.065413 kubelet[2121]: I0710 00:43:16.065357 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-var-lib-calico\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065413 kubelet[2121]: I0710 00:43:16.065396 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-cni-net-dir\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065413 kubelet[2121]: I0710 00:43:16.065412 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-flexvol-driver-host\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065627 kubelet[2121]: I0710 00:43:16.065466 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-lib-modules\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065627 kubelet[2121]: 
I0710 00:43:16.065489 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-tigera-ca-bundle\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065627 kubelet[2121]: I0710 00:43:16.065507 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-var-run-calico\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065627 kubelet[2121]: I0710 00:43:16.065525 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfph\" (UniqueName: \"kubernetes.io/projected/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-kube-api-access-jtfph\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065627 kubelet[2121]: I0710 00:43:16.065543 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-xtables-lock\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065768 kubelet[2121]: I0710 00:43:16.065562 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-cni-bin-dir\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065768 kubelet[2121]: I0710 00:43:16.065577 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-node-certs\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065768 kubelet[2121]: I0710 00:43:16.065597 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-policysync\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.065768 kubelet[2121]: I0710 00:43:16.065609 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc5d6c38-2712-4e33-9bde-3313ddf9f1ad-cni-log-dir\") pod \"calico-node-x4lvf\" (UID: \"dc5d6c38-2712-4e33-9bde-3313ddf9f1ad\") " pod="calico-system/calico-node-x4lvf" Jul 10 00:43:16.167672 kubelet[2121]: E0710 00:43:16.167443 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.167672 kubelet[2121]: W0710 00:43:16.167475 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.167672 kubelet[2121]: E0710 00:43:16.167506 2121 
plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.169365 kubelet[2121]: E0710 00:43:16.169331 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.169365 kubelet[2121]: W0710 00:43:16.169357 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.169448 kubelet[2121]: E0710 00:43:16.169376 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.174257 kubelet[2121]: E0710 00:43:16.174232 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.174257 kubelet[2121]: W0710 00:43:16.174244 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.174257 kubelet[2121]: E0710 00:43:16.174254 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.266172 kubelet[2121]: E0710 00:43:16.266035 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:16.273750 kubelet[2121]: E0710 00:43:16.273710 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.273750 kubelet[2121]: W0710 00:43:16.273737 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.273876 kubelet[2121]: E0710 00:43:16.273764 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.274036 kubelet[2121]: E0710 00:43:16.274014 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.274036 kubelet[2121]: W0710 00:43:16.274025 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.274036 kubelet[2121]: E0710 00:43:16.274033 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.274307 kubelet[2121]: E0710 00:43:16.274267 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.274307 kubelet[2121]: W0710 00:43:16.274295 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.274493 kubelet[2121]: E0710 00:43:16.274328 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.274557 kubelet[2121]: E0710 00:43:16.274543 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.274557 kubelet[2121]: W0710 00:43:16.274554 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.274624 kubelet[2121]: E0710 00:43:16.274562 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.274743 kubelet[2121]: E0710 00:43:16.274729 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.274743 kubelet[2121]: W0710 00:43:16.274740 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.274822 kubelet[2121]: E0710 00:43:16.274747 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.274912 kubelet[2121]: E0710 00:43:16.274899 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.274912 kubelet[2121]: W0710 00:43:16.274909 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.274973 kubelet[2121]: E0710 00:43:16.274916 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.275065 kubelet[2121]: E0710 00:43:16.275050 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.275065 kubelet[2121]: W0710 00:43:16.275060 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.275142 kubelet[2121]: E0710 00:43:16.275067 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.275361 kubelet[2121]: E0710 00:43:16.275340 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.275361 kubelet[2121]: W0710 00:43:16.275356 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.275442 kubelet[2121]: E0710 00:43:16.275366 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.275670 kubelet[2121]: E0710 00:43:16.275631 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.275670 kubelet[2121]: W0710 00:43:16.275659 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.275670 kubelet[2121]: E0710 00:43:16.275668 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.276121 kubelet[2121]: E0710 00:43:16.276103 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.276121 kubelet[2121]: W0710 00:43:16.276115 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.276121 kubelet[2121]: E0710 00:43:16.276125 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.276463 kubelet[2121]: E0710 00:43:16.276315 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.276463 kubelet[2121]: W0710 00:43:16.276337 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.276463 kubelet[2121]: E0710 00:43:16.276361 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.276914 kubelet[2121]: E0710 00:43:16.276615 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.276914 kubelet[2121]: W0710 00:43:16.276627 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.276914 kubelet[2121]: E0710 00:43:16.276637 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.276914 kubelet[2121]: E0710 00:43:16.276838 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.276914 kubelet[2121]: W0710 00:43:16.276846 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.276914 kubelet[2121]: E0710 00:43:16.276854 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.277265 kubelet[2121]: E0710 00:43:16.277108 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.277265 kubelet[2121]: W0710 00:43:16.277119 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.277265 kubelet[2121]: E0710 00:43:16.277128 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.277356 kubelet[2121]: E0710 00:43:16.277326 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.277356 kubelet[2121]: W0710 00:43:16.277338 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.277407 kubelet[2121]: E0710 00:43:16.277354 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.277590 kubelet[2121]: E0710 00:43:16.277567 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.277590 kubelet[2121]: W0710 00:43:16.277584 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.277688 kubelet[2121]: E0710 00:43:16.277595 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.277834 kubelet[2121]: E0710 00:43:16.277815 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.277834 kubelet[2121]: W0710 00:43:16.277828 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.277913 kubelet[2121]: E0710 00:43:16.277840 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.278153 kubelet[2121]: E0710 00:43:16.278066 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.278153 kubelet[2121]: W0710 00:43:16.278078 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.278153 kubelet[2121]: E0710 00:43:16.278088 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.278393 kubelet[2121]: E0710 00:43:16.278297 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.278393 kubelet[2121]: W0710 00:43:16.278310 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.278393 kubelet[2121]: E0710 00:43:16.278319 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.278958 kubelet[2121]: E0710 00:43:16.278531 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.278958 kubelet[2121]: W0710 00:43:16.278547 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.278958 kubelet[2121]: E0710 00:43:16.278557 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.279254 env[1307]: time="2025-07-10T00:43:16.279218169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x4lvf,Uid:dc5d6c38-2712-4e33-9bde-3313ddf9f1ad,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:16.298098 env[1307]: time="2025-07-10T00:43:16.295895836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:16.298098 env[1307]: time="2025-07-10T00:43:16.295954057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:16.298098 env[1307]: time="2025-07-10T00:43:16.295965478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:16.298098 env[1307]: time="2025-07-10T00:43:16.296270050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97 pid=2615 runtime=io.containerd.runc.v2 Jul 10 00:43:16.335879 env[1307]: time="2025-07-10T00:43:16.335836643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x4lvf,Uid:dc5d6c38-2712-4e33-9bde-3313ddf9f1ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\"" Jul 10 00:43:16.367454 kubelet[2121]: E0710 00:43:16.367420 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.367454 kubelet[2121]: W0710 00:43:16.367440 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.367454 kubelet[2121]: E0710 00:43:16.367460 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.367807 kubelet[2121]: I0710 00:43:16.367486 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1-socket-dir\") pod \"csi-node-driver-8lnpm\" (UID: \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\") " pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:16.367807 kubelet[2121]: E0710 00:43:16.367722 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.367807 kubelet[2121]: W0710 00:43:16.367745 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.367807 kubelet[2121]: E0710 00:43:16.367771 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.367954 kubelet[2121]: I0710 00:43:16.367814 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1-varrun\") pod \"csi-node-driver-8lnpm\" (UID: \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\") " pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:16.368166 kubelet[2121]: E0710 00:43:16.368134 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.368166 kubelet[2121]: W0710 00:43:16.368159 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.368238 kubelet[2121]: E0710 00:43:16.368180 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.368238 kubelet[2121]: I0710 00:43:16.368195 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1-kubelet-dir\") pod \"csi-node-driver-8lnpm\" (UID: \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\") " pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:16.368455 kubelet[2121]: E0710 00:43:16.368414 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.368455 kubelet[2121]: W0710 00:43:16.368446 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.368629 kubelet[2121]: E0710 00:43:16.368482 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.368629 kubelet[2121]: I0710 00:43:16.368513 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677wn\" (UniqueName: \"kubernetes.io/projected/3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1-kube-api-access-677wn\") pod \"csi-node-driver-8lnpm\" (UID: \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\") " pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:16.368924 kubelet[2121]: E0710 00:43:16.368810 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.368924 kubelet[2121]: W0710 00:43:16.368825 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.368924 kubelet[2121]: E0710 00:43:16.368842 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.369919 kubelet[2121]: E0710 00:43:16.369790 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.369919 kubelet[2121]: W0710 00:43:16.369813 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.369919 kubelet[2121]: E0710 00:43:16.369891 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.370126 kubelet[2121]: E0710 00:43:16.370102 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.370173 kubelet[2121]: W0710 00:43:16.370144 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.370341 kubelet[2121]: E0710 00:43:16.370310 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.370760 kubelet[2121]: E0710 00:43:16.370742 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.370760 kubelet[2121]: W0710 00:43:16.370756 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.370856 kubelet[2121]: E0710 00:43:16.370835 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.370938 kubelet[2121]: E0710 00:43:16.370922 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.370938 kubelet[2121]: W0710 00:43:16.370933 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.370995 kubelet[2121]: E0710 00:43:16.370956 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.370995 kubelet[2121]: I0710 00:43:16.370974 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1-registration-dir\") pod \"csi-node-driver-8lnpm\" (UID: \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\") " pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:16.371066 kubelet[2121]: E0710 00:43:16.371054 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.371096 kubelet[2121]: W0710 00:43:16.371066 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.371128 kubelet[2121]: E0710 00:43:16.371096 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.371213 kubelet[2121]: E0710 00:43:16.371194 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.371213 kubelet[2121]: W0710 00:43:16.371206 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.371308 kubelet[2121]: E0710 00:43:16.371218 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.371481 kubelet[2121]: E0710 00:43:16.371462 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.371481 kubelet[2121]: W0710 00:43:16.371475 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.371559 kubelet[2121]: E0710 00:43:16.371487 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.371694 kubelet[2121]: E0710 00:43:16.371677 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.371694 kubelet[2121]: W0710 00:43:16.371692 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.371758 kubelet[2121]: E0710 00:43:16.371701 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.371910 kubelet[2121]: E0710 00:43:16.371893 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.371910 kubelet[2121]: W0710 00:43:16.371903 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.371910 kubelet[2121]: E0710 00:43:16.371911 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.372086 kubelet[2121]: E0710 00:43:16.372068 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.372086 kubelet[2121]: W0710 00:43:16.372079 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.372086 kubelet[2121]: E0710 00:43:16.372088 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.399000 audit[2667]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:16.399000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe6b1e8860 a2=0 a3=7ffe6b1e884c items=0 ppid=2287 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:16.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:16.403000 audit[2667]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:16.403000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6b1e8860 a2=0 a3=0 items=0 ppid=2287 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:16.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:16.472015 kubelet[2121]: E0710 00:43:16.471958 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.472015 kubelet[2121]: W0710 00:43:16.471988 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.472015 kubelet[2121]: E0710 00:43:16.472018 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.472332 kubelet[2121]: E0710 00:43:16.472318 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.472332 kubelet[2121]: W0710 00:43:16.472328 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.472387 kubelet[2121]: E0710 00:43:16.472341 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.472633 kubelet[2121]: E0710 00:43:16.472605 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.472704 kubelet[2121]: W0710 00:43:16.472632 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.472704 kubelet[2121]: E0710 00:43:16.472679 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.472934 kubelet[2121]: E0710 00:43:16.472909 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.472934 kubelet[2121]: W0710 00:43:16.472929 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.473012 kubelet[2121]: E0710 00:43:16.472958 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.473116 kubelet[2121]: E0710 00:43:16.473103 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.473116 kubelet[2121]: W0710 00:43:16.473112 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.473191 kubelet[2121]: E0710 00:43:16.473124 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.473338 kubelet[2121]: E0710 00:43:16.473324 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.473338 kubelet[2121]: W0710 00:43:16.473332 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.473413 kubelet[2121]: E0710 00:43:16.473344 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.473524 kubelet[2121]: E0710 00:43:16.473511 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.473524 kubelet[2121]: W0710 00:43:16.473519 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.473600 kubelet[2121]: E0710 00:43:16.473531 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.473759 kubelet[2121]: E0710 00:43:16.473741 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.473759 kubelet[2121]: W0710 00:43:16.473753 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.473849 kubelet[2121]: E0710 00:43:16.473782 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.473956 kubelet[2121]: E0710 00:43:16.473942 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.473956 kubelet[2121]: W0710 00:43:16.473952 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.474011 kubelet[2121]: E0710 00:43:16.473977 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.474111 kubelet[2121]: E0710 00:43:16.474098 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.474111 kubelet[2121]: W0710 00:43:16.474107 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.474161 kubelet[2121]: E0710 00:43:16.474121 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.474292 kubelet[2121]: E0710 00:43:16.474279 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.474292 kubelet[2121]: W0710 00:43:16.474289 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.474359 kubelet[2121]: E0710 00:43:16.474303 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.474473 kubelet[2121]: E0710 00:43:16.474454 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.474473 kubelet[2121]: W0710 00:43:16.474466 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.474555 kubelet[2121]: E0710 00:43:16.474479 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.474704 kubelet[2121]: E0710 00:43:16.474688 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.474704 kubelet[2121]: W0710 00:43:16.474702 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.474764 kubelet[2121]: E0710 00:43:16.474718 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.474929 kubelet[2121]: E0710 00:43:16.474913 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.474929 kubelet[2121]: W0710 00:43:16.474922 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475021 kubelet[2121]: E0710 00:43:16.474935 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.475100 kubelet[2121]: E0710 00:43:16.475085 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.475129 kubelet[2121]: W0710 00:43:16.475100 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475129 kubelet[2121]: E0710 00:43:16.475113 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.475285 kubelet[2121]: E0710 00:43:16.475268 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.475285 kubelet[2121]: W0710 00:43:16.475280 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475357 kubelet[2121]: E0710 00:43:16.475310 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.475448 kubelet[2121]: E0710 00:43:16.475435 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.475448 kubelet[2121]: W0710 00:43:16.475445 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475499 kubelet[2121]: E0710 00:43:16.475470 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.475672 kubelet[2121]: E0710 00:43:16.475623 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.475672 kubelet[2121]: W0710 00:43:16.475641 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475672 kubelet[2121]: E0710 00:43:16.475665 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.475926 kubelet[2121]: E0710 00:43:16.475913 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.475956 kubelet[2121]: W0710 00:43:16.475925 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.475956 kubelet[2121]: E0710 00:43:16.475938 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.476096 kubelet[2121]: E0710 00:43:16.476087 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.476123 kubelet[2121]: W0710 00:43:16.476095 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.476123 kubelet[2121]: E0710 00:43:16.476107 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.476279 kubelet[2121]: E0710 00:43:16.476266 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.476307 kubelet[2121]: W0710 00:43:16.476278 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.476307 kubelet[2121]: E0710 00:43:16.476292 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.476511 kubelet[2121]: E0710 00:43:16.476496 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.476511 kubelet[2121]: W0710 00:43:16.476507 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.476584 kubelet[2121]: E0710 00:43:16.476525 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.476729 kubelet[2121]: E0710 00:43:16.476715 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.476729 kubelet[2121]: W0710 00:43:16.476725 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.476812 kubelet[2121]: E0710 00:43:16.476738 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:16.476948 kubelet[2121]: E0710 00:43:16.476935 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.476948 kubelet[2121]: W0710 00:43:16.476946 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.476995 kubelet[2121]: E0710 00:43:16.476959 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.477271 kubelet[2121]: E0710 00:43:16.477241 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.477271 kubelet[2121]: W0710 00:43:16.477254 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.477271 kubelet[2121]: E0710 00:43:16.477269 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:16.483884 kubelet[2121]: E0710 00:43:16.483859 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:16.483884 kubelet[2121]: W0710 00:43:16.483872 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:16.483884 kubelet[2121]: E0710 00:43:16.483881 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:17.421966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122430538.mount: Deactivated successfully. 
Jul 10 00:43:18.477796 env[1307]: time="2025-07-10T00:43:18.477745184Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:18.479673 kubelet[2121]: E0710 00:43:18.479587 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:18.480014 env[1307]: time="2025-07-10T00:43:18.479705352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:18.481302 env[1307]: time="2025-07-10T00:43:18.481275437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:18.482694 env[1307]: time="2025-07-10T00:43:18.482639529Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:18.483178 env[1307]: time="2025-07-10T00:43:18.483138641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:43:18.484126 env[1307]: time="2025-07-10T00:43:18.484097609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:43:18.494647 env[1307]: time="2025-07-10T00:43:18.493277507Z" level=info msg="CreateContainer within sandbox \"3df9f656a15a0909d43f0d437a100a35779a7c1951bcc86f9ef56f4ee6afe90f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:43:18.509806 env[1307]: time="2025-07-10T00:43:18.509760933Z" level=info msg="CreateContainer within sandbox \"3df9f656a15a0909d43f0d437a100a35779a7c1951bcc86f9ef56f4ee6afe90f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"32d14ea4a959403ecfab80d9223dcae2e1c44e806811844a121c8a390ae88f3e\"" Jul 10 00:43:18.510694 env[1307]: time="2025-07-10T00:43:18.510642495Z" level=info msg="StartContainer for \"32d14ea4a959403ecfab80d9223dcae2e1c44e806811844a121c8a390ae88f3e\"" Jul 10 00:43:18.564423 env[1307]: time="2025-07-10T00:43:18.564341778Z" level=info msg="StartContainer for \"32d14ea4a959403ecfab80d9223dcae2e1c44e806811844a121c8a390ae88f3e\" returns successfully" Jul 10 00:43:19.421047 kubelet[2121]: E0710 00:43:19.420994 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:19.500853 kubelet[2121]: E0710 00:43:19.500799 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.500853 kubelet[2121]: W0710 00:43:19.500836 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.500853 kubelet[2121]: E0710 00:43:19.500857 
2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.501407 kubelet[2121]: E0710 00:43:19.501107 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.501407 kubelet[2121]: W0710 00:43:19.501128 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.501407 kubelet[2121]: E0710 00:43:19.501138 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.501407 kubelet[2121]: E0710 00:43:19.501273 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.501407 kubelet[2121]: W0710 00:43:19.501281 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.501407 kubelet[2121]: E0710 00:43:19.501289 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.501633 kubelet[2121]: E0710 00:43:19.501429 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.501633 kubelet[2121]: W0710 00:43:19.501438 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.501633 kubelet[2121]: E0710 00:43:19.501447 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.501633 kubelet[2121]: E0710 00:43:19.501584 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.501633 kubelet[2121]: W0710 00:43:19.501592 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.501633 kubelet[2121]: E0710 00:43:19.501601 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.501892 kubelet[2121]: E0710 00:43:19.501748 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.501892 kubelet[2121]: W0710 00:43:19.501757 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.501892 kubelet[2121]: E0710 00:43:19.501765 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.502002 kubelet[2121]: E0710 00:43:19.501903 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.502002 kubelet[2121]: W0710 00:43:19.501911 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.502002 kubelet[2121]: E0710 00:43:19.501918 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.502102 kubelet[2121]: E0710 00:43:19.502038 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.502102 kubelet[2121]: W0710 00:43:19.502044 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.502102 kubelet[2121]: E0710 00:43:19.502051 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.502244 kubelet[2121]: E0710 00:43:19.502213 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.502244 kubelet[2121]: W0710 00:43:19.502232 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.502244 kubelet[2121]: E0710 00:43:19.502241 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.502538 kubelet[2121]: E0710 00:43:19.502522 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.502538 kubelet[2121]: W0710 00:43:19.502532 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.502538 kubelet[2121]: E0710 00:43:19.502541 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.502812 kubelet[2121]: E0710 00:43:19.502782 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.502812 kubelet[2121]: W0710 00:43:19.502806 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.502812 kubelet[2121]: E0710 00:43:19.502817 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.503132 kubelet[2121]: E0710 00:43:19.503083 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.503132 kubelet[2121]: W0710 00:43:19.503120 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.503255 kubelet[2121]: E0710 00:43:19.503162 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.503488 kubelet[2121]: E0710 00:43:19.503467 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.503488 kubelet[2121]: W0710 00:43:19.503482 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.503587 kubelet[2121]: E0710 00:43:19.503494 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.503707 kubelet[2121]: E0710 00:43:19.503690 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.503707 kubelet[2121]: W0710 00:43:19.503704 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.503792 kubelet[2121]: E0710 00:43:19.503716 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.503948 kubelet[2121]: E0710 00:43:19.503929 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.503948 kubelet[2121]: W0710 00:43:19.503943 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.504032 kubelet[2121]: E0710 00:43:19.503955 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.597363 kubelet[2121]: E0710 00:43:19.597326 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.597363 kubelet[2121]: W0710 00:43:19.597344 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.597363 kubelet[2121]: E0710 00:43:19.597362 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.597629 kubelet[2121]: E0710 00:43:19.597580 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.597629 kubelet[2121]: W0710 00:43:19.597587 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.597629 kubelet[2121]: E0710 00:43:19.597601 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.597790 kubelet[2121]: E0710 00:43:19.597775 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.597790 kubelet[2121]: W0710 00:43:19.597786 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.597877 kubelet[2121]: E0710 00:43:19.597801 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.598006 kubelet[2121]: E0710 00:43:19.597989 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.598006 kubelet[2121]: W0710 00:43:19.598002 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.598079 kubelet[2121]: E0710 00:43:19.598018 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.598228 kubelet[2121]: E0710 00:43:19.598210 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.598228 kubelet[2121]: W0710 00:43:19.598223 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.598295 kubelet[2121]: E0710 00:43:19.598240 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.598447 kubelet[2121]: E0710 00:43:19.598421 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.598447 kubelet[2121]: W0710 00:43:19.598438 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.598505 kubelet[2121]: E0710 00:43:19.598454 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.598674 kubelet[2121]: E0710 00:43:19.598630 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.598674 kubelet[2121]: W0710 00:43:19.598648 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.598750 kubelet[2121]: E0710 00:43:19.598682 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.598893 kubelet[2121]: E0710 00:43:19.598879 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.598893 kubelet[2121]: W0710 00:43:19.598889 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.598948 kubelet[2121]: E0710 00:43:19.598902 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.599097 kubelet[2121]: E0710 00:43:19.599085 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599126 kubelet[2121]: W0710 00:43:19.599098 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.599126 kubelet[2121]: E0710 00:43:19.599122 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.599267 kubelet[2121]: E0710 00:43:19.599255 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599267 kubelet[2121]: W0710 00:43:19.599264 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.599316 kubelet[2121]: E0710 00:43:19.599291 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.599442 kubelet[2121]: E0710 00:43:19.599430 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599442 kubelet[2121]: W0710 00:43:19.599439 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.599491 kubelet[2121]: E0710 00:43:19.599455 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.599636 kubelet[2121]: E0710 00:43:19.599623 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599636 kubelet[2121]: W0710 00:43:19.599633 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.599702 kubelet[2121]: E0710 00:43:19.599646 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.599807 kubelet[2121]: E0710 00:43:19.599792 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599807 kubelet[2121]: W0710 00:43:19.599804 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.599870 kubelet[2121]: E0710 00:43:19.599816 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.599995 kubelet[2121]: E0710 00:43:19.599981 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.599995 kubelet[2121]: W0710 00:43:19.599991 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.600044 kubelet[2121]: E0710 00:43:19.600003 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.600130 kubelet[2121]: E0710 00:43:19.600118 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.600130 kubelet[2121]: W0710 00:43:19.600126 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.600184 kubelet[2121]: E0710 00:43:19.600137 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.600305 kubelet[2121]: E0710 00:43:19.600289 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.600305 kubelet[2121]: W0710 00:43:19.600298 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.600379 kubelet[2121]: E0710 00:43:19.600311 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:43:19.600516 kubelet[2121]: E0710 00:43:19.600499 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.600516 kubelet[2121]: W0710 00:43:19.600512 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.600596 kubelet[2121]: E0710 00:43:19.600527 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.600752 kubelet[2121]: E0710 00:43:19.600738 2121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:43:19.600752 kubelet[2121]: W0710 00:43:19.600750 2121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:43:19.600813 kubelet[2121]: E0710 00:43:19.600760 2121 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:43:19.772209 env[1307]: time="2025-07-10T00:43:19.772038565Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:19.776770 env[1307]: time="2025-07-10T00:43:19.776725538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:19.778190 env[1307]: time="2025-07-10T00:43:19.778159081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:19.779637 env[1307]: time="2025-07-10T00:43:19.779568548Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:19.779955 env[1307]: time="2025-07-10T00:43:19.779926430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:43:19.782252 env[1307]: time="2025-07-10T00:43:19.782223198Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:43:19.794548 env[1307]: time="2025-07-10T00:43:19.794364551Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc\"" Jul 10 00:43:19.794915 env[1307]: time="2025-07-10T00:43:19.794882818Z" level=info msg="StartContainer for \"6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc\"" Jul 10 00:43:19.855454 
env[1307]: time="2025-07-10T00:43:19.855410029Z" level=info msg="StartContainer for \"6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc\" returns successfully" Jul 10 00:43:19.904521 env[1307]: time="2025-07-10T00:43:19.904459132Z" level=info msg="shim disconnected" id=6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc Jul 10 00:43:19.904521 env[1307]: time="2025-07-10T00:43:19.904518535Z" level=warning msg="cleaning up after shim disconnected" id=6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc namespace=k8s.io Jul 10 00:43:19.904521 env[1307]: time="2025-07-10T00:43:19.904530838Z" level=info msg="cleaning up dead shim" Jul 10 00:43:19.911778 env[1307]: time="2025-07-10T00:43:19.911746742Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:43:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2820 runtime=io.containerd.runc.v2\n" Jul 10 00:43:20.319689 kubelet[2121]: E0710 00:43:20.319606 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:20.424155 kubelet[2121]: I0710 00:43:20.424103 2121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:43:20.424527 kubelet[2121]: E0710 00:43:20.424497 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:20.425017 env[1307]: time="2025-07-10T00:43:20.424968160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:43:20.439639 kubelet[2121]: I0710 00:43:20.439544 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c9d5b547-582gb" podStartSLOduration=2.9722956910000002 podStartE2EDuration="5.439522678s" podCreationTimestamp="2025-07-10 00:43:15 +0000 UTC" firstStartedPulling="2025-07-10 00:43:16.01668049 +0000 UTC m=+16.807060354" lastFinishedPulling="2025-07-10 00:43:18.483907476 +0000 UTC m=+19.274287341" observedRunningTime="2025-07-10 00:43:19.434193809 +0000 UTC m=+20.224573673" watchObservedRunningTime="2025-07-10 00:43:20.439522678 +0000 UTC m=+21.229902542" Jul 10 00:43:20.489054 systemd[1]: run-containerd-runc-k8s.io-6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc-runc.Hs0jZG.mount: Deactivated successfully. Jul 10 00:43:20.489203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b15515f48691be9c4e7c0a0f73c133a112a6ee752baa47d88b9afb7550411cc-rootfs.mount: Deactivated successfully. 
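[Editor's annotation] The repeated kubelet messages above come from the FlexVolume plugin prober: the kubelet executes the driver binary with the single argument "init" and tries to unmarshal a JSON status object from its stdout. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds has not been installed yet (installing it is the job of the flexvol-driver init container whose StartContainer/shim-exit messages appear above), the call produces no output at all and the decode fails with "unexpected end of JSON input". The Go sketch below only illustrates the response shape a FlexVolume driver is conventionally expected to print for "init"; the file name, type name, and capability value are invented for the example and are not part of the logged system.

// flexvol_init_sketch.go -- illustrative sketch, not part of the logged system.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet tries to unmarshal after
// every driver call; an empty stdout is what triggers the errors logged above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and (as an assumed example) no attach support.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Unhandled calls are conventionally answered with "Not supported".
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}

Once a binary at that path answers "init" with a Success status, the prober stops emitting these unmarshal errors.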
Jul 10 00:43:22.319469 kubelet[2121]: E0710 00:43:22.319420 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:23.847024 env[1307]: time="2025-07-10T00:43:23.846969129Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:23.849467 env[1307]: time="2025-07-10T00:43:23.849412248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:23.851162 env[1307]: time="2025-07-10T00:43:23.851105289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:23.852684 env[1307]: time="2025-07-10T00:43:23.852634297Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:23.853253 env[1307]: time="2025-07-10T00:43:23.853211335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:43:23.855405 env[1307]: time="2025-07-10T00:43:23.855362257Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:43:23.871459 env[1307]: time="2025-07-10T00:43:23.871376724Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee\"" Jul 10 00:43:23.872147 env[1307]: time="2025-07-10T00:43:23.872095321Z" level=info msg="StartContainer for \"a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee\"" Jul 10 00:43:24.319602 kubelet[2121]: E0710 00:43:24.319548 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:24.626991 env[1307]: time="2025-07-10T00:43:24.626854362Z" level=info msg="StartContainer for \"a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee\" returns successfully" Jul 10 00:43:25.256854 env[1307]: time="2025-07-10T00:43:25.256772879Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:43:25.278731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee-rootfs.mount: Deactivated successfully. 
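[Editor's annotation] The install-cni container started above writes Calico's CNI configuration and binaries onto the host; the "failed to reload cni configuration ... no network config found in /etc/cni/net.d: cni plugin not initialized" message shows the runtime is still waiting for that config, which is why it keeps reporting NetworkReady=false. The later sandbox failures additionally point at a missing /var/lib/calico/nodename, which (per the error text itself) appears once the calico/node container is running. The stdlib-only Go sketch below mirrors only the "is there a CNI config yet?" condition; the directory path is the conventional default from the log, the check is deliberately simplified to *.conflist files, and the program is illustrative rather than anything running on the logged node.

// cni_ready_check.go -- illustrative sketch, not part of the logged system.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Conventional CNI config directory, as seen in the reload error above.
	confDir := "/etc/cni/net.d"

	// Simplification: only look for *.conflist; runtimes also accept other
	// config file extensions.
	matches, err := filepath.Glob(filepath.Join(confDir, "*.conflist"))
	if err != nil {
		fmt.Println("glob error:", err)
		return
	}
	if len(matches) == 0 {
		fmt.Printf("no CNI network config in %s: node stays NetworkReady=false\n", confDir)
		return
	}
	fmt.Printf("found CNI config(s): %v\n", matches)
}

Until such a config exists, every pod that needs the pod network (the coredns, goldmane, whisker, calico-apiserver, and csi-node-driver pods in the entries that follow) fails sandbox creation.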
Jul 10 00:43:25.281992 env[1307]: time="2025-07-10T00:43:25.281934120Z" level=info msg="shim disconnected" id=a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee Jul 10 00:43:25.282105 env[1307]: time="2025-07-10T00:43:25.281993723Z" level=warning msg="cleaning up after shim disconnected" id=a7e7a212566c77df407cfcf533923c247697b1ee91bd004f9c2de405a102a0ee namespace=k8s.io Jul 10 00:43:25.282105 env[1307]: time="2025-07-10T00:43:25.282003311Z" level=info msg="cleaning up dead shim" Jul 10 00:43:25.289750 env[1307]: time="2025-07-10T00:43:25.289674343Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n" Jul 10 00:43:25.335813 kubelet[2121]: I0710 00:43:25.335678 2121 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:43:25.540170 kubelet[2121]: I0710 00:43:25.539995 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hf25\" (UniqueName: \"kubernetes.io/projected/e01c7c4e-eca5-4812-95c5-200d95d24a32-kube-api-access-7hf25\") pod \"goldmane-58fd7646b9-5prrt\" (UID: \"e01c7c4e-eca5-4812-95c5-200d95d24a32\") " pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.540170 kubelet[2121]: I0710 00:43:25.540053 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-backend-key-pair\") pod \"whisker-586d6954-lvt2n\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " pod="calico-system/whisker-586d6954-lvt2n" Jul 10 00:43:25.540170 kubelet[2121]: I0710 00:43:25.540081 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7f6l\" (UniqueName: \"kubernetes.io/projected/f2307b87-39c7-43c6-8c91-1f74a3de69ab-kube-api-access-f7f6l\") pod \"calico-apiserver-646c7495cd-c8pph\" (UID: \"f2307b87-39c7-43c6-8c91-1f74a3de69ab\") " pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" Jul 10 00:43:25.540170 kubelet[2121]: I0710 00:43:25.540115 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8p8\" (UniqueName: \"kubernetes.io/projected/3c49502b-c641-4c73-b4e5-5955ec9166b1-kube-api-access-4l8p8\") pod \"coredns-7c65d6cfc9-nbqqb\" (UID: \"3c49502b-c641-4c73-b4e5-5955ec9166b1\") " pod="kube-system/coredns-7c65d6cfc9-nbqqb" Jul 10 00:43:25.540455 kubelet[2121]: I0710 00:43:25.540194 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e01c7c4e-eca5-4812-95c5-200d95d24a32-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-5prrt\" (UID: \"e01c7c4e-eca5-4812-95c5-200d95d24a32\") " pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.540455 kubelet[2121]: I0710 00:43:25.540245 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzv6\" (UniqueName: \"kubernetes.io/projected/506b8ef0-d513-4ac6-984e-8cdac4618a2c-kube-api-access-qkzv6\") pod \"whisker-586d6954-lvt2n\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " pod="calico-system/whisker-586d6954-lvt2n" Jul 10 00:43:25.540455 kubelet[2121]: I0710 00:43:25.540272 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fae28b13-a385-46eb-8a07-d49af21f8b28-calico-apiserver-certs\") pod \"calico-apiserver-646c7495cd-vm5gv\" (UID: \"fae28b13-a385-46eb-8a07-d49af21f8b28\") " pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" Jul 10 00:43:25.540455 kubelet[2121]: I0710 00:43:25.540293 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mbn\" (UniqueName: \"kubernetes.io/projected/fae28b13-a385-46eb-8a07-d49af21f8b28-kube-api-access-w4mbn\") pod \"calico-apiserver-646c7495cd-vm5gv\" (UID: \"fae28b13-a385-46eb-8a07-d49af21f8b28\") " pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" Jul 10 00:43:25.540455 kubelet[2121]: I0710 00:43:25.540315 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2307b87-39c7-43c6-8c91-1f74a3de69ab-calico-apiserver-certs\") pod \"calico-apiserver-646c7495cd-c8pph\" (UID: \"f2307b87-39c7-43c6-8c91-1f74a3de69ab\") " pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" Jul 10 00:43:25.540716 kubelet[2121]: I0710 00:43:25.540354 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-ca-bundle\") pod \"whisker-586d6954-lvt2n\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " pod="calico-system/whisker-586d6954-lvt2n" Jul 10 00:43:25.540716 kubelet[2121]: I0710 00:43:25.540394 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28fe5df-eb28-446d-ba1a-53a53e95c947-tigera-ca-bundle\") pod \"calico-kube-controllers-d58fb4f9-n2p7m\" (UID: \"b28fe5df-eb28-446d-ba1a-53a53e95c947\") " pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" Jul 10 00:43:25.540716 kubelet[2121]: I0710 00:43:25.540419 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c49502b-c641-4c73-b4e5-5955ec9166b1-config-volume\") pod \"coredns-7c65d6cfc9-nbqqb\" (UID: \"3c49502b-c641-4c73-b4e5-5955ec9166b1\") " pod="kube-system/coredns-7c65d6cfc9-nbqqb" Jul 10 00:43:25.540716 kubelet[2121]: I0710 00:43:25.540457 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e01c7c4e-eca5-4812-95c5-200d95d24a32-goldmane-key-pair\") pod \"goldmane-58fd7646b9-5prrt\" (UID: \"e01c7c4e-eca5-4812-95c5-200d95d24a32\") " pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.540716 kubelet[2121]: I0710 00:43:25.540482 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04b4ccc-80be-498b-a53a-b961975a280d-config-volume\") pod \"coredns-7c65d6cfc9-hlrk9\" (UID: \"f04b4ccc-80be-498b-a53a-b961975a280d\") " pod="kube-system/coredns-7c65d6cfc9-hlrk9" Jul 10 00:43:25.540895 kubelet[2121]: I0710 00:43:25.540525 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01c7c4e-eca5-4812-95c5-200d95d24a32-config\") pod \"goldmane-58fd7646b9-5prrt\" (UID: \"e01c7c4e-eca5-4812-95c5-200d95d24a32\") " 
pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.540895 kubelet[2121]: I0710 00:43:25.540551 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xwc\" (UniqueName: \"kubernetes.io/projected/f04b4ccc-80be-498b-a53a-b961975a280d-kube-api-access-p5xwc\") pod \"coredns-7c65d6cfc9-hlrk9\" (UID: \"f04b4ccc-80be-498b-a53a-b961975a280d\") " pod="kube-system/coredns-7c65d6cfc9-hlrk9" Jul 10 00:43:25.540895 kubelet[2121]: I0710 00:43:25.540667 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hqm\" (UniqueName: \"kubernetes.io/projected/b28fe5df-eb28-446d-ba1a-53a53e95c947-kube-api-access-r5hqm\") pod \"calico-kube-controllers-d58fb4f9-n2p7m\" (UID: \"b28fe5df-eb28-446d-ba1a-53a53e95c947\") " pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" Jul 10 00:43:25.636459 env[1307]: time="2025-07-10T00:43:25.636385100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:43:25.668858 env[1307]: time="2025-07-10T00:43:25.668787557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5prrt,Uid:e01c7c4e-eca5-4812-95c5-200d95d24a32,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:25.675748 env[1307]: time="2025-07-10T00:43:25.675692663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-586d6954-lvt2n,Uid:506b8ef0-d513-4ac6-984e-8cdac4618a2c,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:25.676026 env[1307]: time="2025-07-10T00:43:25.676004886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d58fb4f9-n2p7m,Uid:b28fe5df-eb28-446d-ba1a-53a53e95c947,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:25.676105 env[1307]: time="2025-07-10T00:43:25.676081582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-c8pph,Uid:f2307b87-39c7-43c6-8c91-1f74a3de69ab,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:43:25.679473 kubelet[2121]: E0710 00:43:25.679449 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:25.679896 env[1307]: time="2025-07-10T00:43:25.679872338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbqqb,Uid:3c49502b-c641-4c73-b4e5-5955ec9166b1,Namespace:kube-system,Attempt:0,}" Jul 10 00:43:25.681936 env[1307]: time="2025-07-10T00:43:25.681891947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-vm5gv,Uid:fae28b13-a385-46eb-8a07-d49af21f8b28,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:43:25.817094 env[1307]: time="2025-07-10T00:43:25.816939680Z" level=error msg="Failed to destroy network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.817732 env[1307]: time="2025-07-10T00:43:25.817705556Z" level=error msg="encountered an error cleaning up failed sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
00:43:25.817869 env[1307]: time="2025-07-10T00:43:25.817829100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d58fb4f9-n2p7m,Uid:b28fe5df-eb28-446d-ba1a-53a53e95c947,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.818325 kubelet[2121]: E0710 00:43:25.818266 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.818415 kubelet[2121]: E0710 00:43:25.818357 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" Jul 10 00:43:25.818415 kubelet[2121]: E0710 00:43:25.818388 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" Jul 10 00:43:25.818470 kubelet[2121]: E0710 00:43:25.818435 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d58fb4f9-n2p7m_calico-system(b28fe5df-eb28-446d-ba1a-53a53e95c947)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d58fb4f9-n2p7m_calico-system(b28fe5df-eb28-446d-ba1a-53a53e95c947)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" podUID="b28fe5df-eb28-446d-ba1a-53a53e95c947" Jul 10 00:43:25.828837 env[1307]: time="2025-07-10T00:43:25.828776121Z" level=error msg="Failed to destroy network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.828837 env[1307]: time="2025-07-10T00:43:25.828784206Z" level=error msg="Failed to destroy network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829037 env[1307]: time="2025-07-10T00:43:25.828784226Z" level=error msg="Failed to destroy network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829254 env[1307]: time="2025-07-10T00:43:25.829222849Z" level=error msg="encountered an error cleaning up failed sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829307 env[1307]: time="2025-07-10T00:43:25.829245242Z" level=error msg="encountered an error cleaning up failed sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829307 env[1307]: time="2025-07-10T00:43:25.829259940Z" level=error msg="encountered an error cleaning up failed sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829307 env[1307]: time="2025-07-10T00:43:25.829276391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5prrt,Uid:e01c7c4e-eca5-4812-95c5-200d95d24a32,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829307 env[1307]: time="2025-07-10T00:43:25.829295297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbqqb,Uid:3c49502b-c641-4c73-b4e5-5955ec9166b1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829446 env[1307]: time="2025-07-10T00:43:25.829297371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-586d6954-lvt2n,Uid:506b8ef0-d513-4ac6-984e-8cdac4618a2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829572 kubelet[2121]: E0710 00:43:25.829515 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829572 kubelet[2121]: E0710 00:43:25.829552 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829695 kubelet[2121]: E0710 00:43:25.829577 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nbqqb" Jul 10 00:43:25.829695 kubelet[2121]: E0710 00:43:25.829515 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.829695 kubelet[2121]: E0710 00:43:25.829597 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.829695 kubelet[2121]: E0710 00:43:25.829598 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nbqqb" Jul 10 00:43:25.829802 kubelet[2121]: E0710 00:43:25.829609 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5prrt" Jul 10 00:43:25.829802 kubelet[2121]: E0710 00:43:25.829638 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-5prrt_calico-system(e01c7c4e-eca5-4812-95c5-200d95d24a32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-5prrt_calico-system(e01c7c4e-eca5-4812-95c5-200d95d24a32)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5prrt" podUID="e01c7c4e-eca5-4812-95c5-200d95d24a32" Jul 10 00:43:25.829802 kubelet[2121]: E0710 00:43:25.829637 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nbqqb_kube-system(3c49502b-c641-4c73-b4e5-5955ec9166b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nbqqb_kube-system(3c49502b-c641-4c73-b4e5-5955ec9166b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nbqqb" podUID="3c49502b-c641-4c73-b4e5-5955ec9166b1" Jul 10 00:43:25.829932 kubelet[2121]: E0710 00:43:25.829580 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-586d6954-lvt2n" Jul 10 00:43:25.829932 kubelet[2121]: E0710 00:43:25.829682 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-586d6954-lvt2n" Jul 10 00:43:25.829932 kubelet[2121]: E0710 00:43:25.829702 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-586d6954-lvt2n_calico-system(506b8ef0-d513-4ac6-984e-8cdac4618a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-586d6954-lvt2n_calico-system(506b8ef0-d513-4ac6-984e-8cdac4618a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-586d6954-lvt2n" podUID="506b8ef0-d513-4ac6-984e-8cdac4618a2c" Jul 10 00:43:25.844205 env[1307]: time="2025-07-10T00:43:25.844135668Z" level=error msg="Failed to destroy network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.844509 env[1307]: time="2025-07-10T00:43:25.844480384Z" level=error msg="encountered an error cleaning up failed sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.844560 env[1307]: time="2025-07-10T00:43:25.844536029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-vm5gv,Uid:fae28b13-a385-46eb-8a07-d49af21f8b28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.844817 kubelet[2121]: E0710 00:43:25.844762 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.844888 kubelet[2121]: E0710 00:43:25.844831 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" Jul 10 00:43:25.844888 kubelet[2121]: E0710 00:43:25.844860 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" Jul 10 00:43:25.844952 kubelet[2121]: E0710 00:43:25.844906 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646c7495cd-vm5gv_calico-apiserver(fae28b13-a385-46eb-8a07-d49af21f8b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646c7495cd-vm5gv_calico-apiserver(fae28b13-a385-46eb-8a07-d49af21f8b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" podUID="fae28b13-a385-46eb-8a07-d49af21f8b28" Jul 10 00:43:25.846994 env[1307]: time="2025-07-10T00:43:25.846934619Z" level=error msg="Failed to destroy network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.847299 env[1307]: time="2025-07-10T00:43:25.847272611Z" level=error msg="encountered an error 
cleaning up failed sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.847334 env[1307]: time="2025-07-10T00:43:25.847314381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-c8pph,Uid:f2307b87-39c7-43c6-8c91-1f74a3de69ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.847547 kubelet[2121]: E0710 00:43:25.847511 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:25.847638 kubelet[2121]: E0710 00:43:25.847567 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" Jul 10 00:43:25.847773 kubelet[2121]: E0710 00:43:25.847590 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" Jul 10 00:43:25.847851 kubelet[2121]: E0710 00:43:25.847815 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646c7495cd-c8pph_calico-apiserver(f2307b87-39c7-43c6-8c91-1f74a3de69ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646c7495cd-c8pph_calico-apiserver(f2307b87-39c7-43c6-8c91-1f74a3de69ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" podUID="f2307b87-39c7-43c6-8c91-1f74a3de69ab" Jul 10 00:43:25.957413 kubelet[2121]: E0710 00:43:25.957347 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:25.957980 env[1307]: time="2025-07-10T00:43:25.957934787Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlrk9,Uid:f04b4ccc-80be-498b-a53a-b961975a280d,Namespace:kube-system,Attempt:0,}" Jul 10 00:43:26.019348 env[1307]: time="2025-07-10T00:43:26.019264380Z" level=error msg="Failed to destroy network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.019718 env[1307]: time="2025-07-10T00:43:26.019675532Z" level=error msg="encountered an error cleaning up failed sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.019775 env[1307]: time="2025-07-10T00:43:26.019748309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlrk9,Uid:f04b4ccc-80be-498b-a53a-b961975a280d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.020079 kubelet[2121]: E0710 00:43:26.020031 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.020168 kubelet[2121]: E0710 00:43:26.020110 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlrk9" Jul 10 00:43:26.020168 kubelet[2121]: E0710 00:43:26.020131 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlrk9" Jul 10 00:43:26.020242 kubelet[2121]: E0710 00:43:26.020178 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hlrk9_kube-system(f04b4ccc-80be-498b-a53a-b961975a280d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hlrk9_kube-system(f04b4ccc-80be-498b-a53a-b961975a280d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hlrk9" podUID="f04b4ccc-80be-498b-a53a-b961975a280d" Jul 10 00:43:26.303837 kubelet[2121]: I0710 00:43:26.303782 2121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:43:26.304224 kubelet[2121]: E0710 00:43:26.304201 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:26.324098 env[1307]: time="2025-07-10T00:43:26.324046896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lnpm,Uid:3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:26.330000 audit[3147]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:26.332459 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 10 00:43:26.332596 kernel: audit: type=1325 audit(1752108206.330:282): table=filter:97 family=2 entries=21 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:26.330000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb29b7110 a2=0 a3=7fffb29b70fc items=0 ppid=2287 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:26.339578 kernel: audit: type=1300 audit(1752108206.330:282): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb29b7110 a2=0 a3=7fffb29b70fc items=0 ppid=2287 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:26.339644 kernel: audit: type=1327 audit(1752108206.330:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:26.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:26.343000 audit[3147]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:26.346680 kernel: audit: type=1325 audit(1752108206.343:283): table=nat:98 family=2 entries=19 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:26.346730 kernel: audit: type=1300 audit(1752108206.343:283): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb29b7110 a2=0 a3=7fffb29b70fc items=0 ppid=2287 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:26.343000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb29b7110 a2=0 a3=7fffb29b70fc items=0 ppid=2287 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:26.343000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:26.353573 kernel: audit: type=1327 audit(1752108206.343:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:26.382242 env[1307]: time="2025-07-10T00:43:26.382157612Z" level=error msg="Failed to destroy network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.382567 env[1307]: time="2025-07-10T00:43:26.382531363Z" level=error msg="encountered an error cleaning up failed sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.382605 env[1307]: time="2025-07-10T00:43:26.382580276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lnpm,Uid:3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.382915 kubelet[2121]: E0710 00:43:26.382863 2121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.383291 kubelet[2121]: E0710 00:43:26.382933 2121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:26.383291 kubelet[2121]: E0710 00:43:26.382962 2121 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lnpm" Jul 10 00:43:26.383291 kubelet[2121]: E0710 00:43:26.383010 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8lnpm_calico-system(3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8lnpm_calico-system(3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:26.385327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502-shm.mount: Deactivated successfully. Jul 10 00:43:26.638955 kubelet[2121]: I0710 00:43:26.638828 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:26.639730 env[1307]: time="2025-07-10T00:43:26.639674039Z" level=info msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" Jul 10 00:43:26.641714 kubelet[2121]: I0710 00:43:26.641684 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:26.642380 env[1307]: time="2025-07-10T00:43:26.642346528Z" level=info msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" Jul 10 00:43:26.643541 kubelet[2121]: I0710 00:43:26.643505 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:26.644184 env[1307]: time="2025-07-10T00:43:26.644128775Z" level=info msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" Jul 10 00:43:26.646429 kubelet[2121]: I0710 00:43:26.646074 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:26.646678 env[1307]: time="2025-07-10T00:43:26.646612776Z" level=info msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" Jul 10 00:43:26.647837 kubelet[2121]: I0710 00:43:26.647808 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:26.648458 env[1307]: time="2025-07-10T00:43:26.648409650Z" level=info msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" Jul 10 00:43:26.649767 kubelet[2121]: I0710 00:43:26.649740 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:26.650310 env[1307]: time="2025-07-10T00:43:26.650280715Z" level=info msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" Jul 10 00:43:26.651620 kubelet[2121]: I0710 00:43:26.651570 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:26.652598 env[1307]: time="2025-07-10T00:43:26.652567380Z" level=info msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" Jul 10 00:43:26.653341 kubelet[2121]: I0710 00:43:26.653308 2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:26.653528 kubelet[2121]: E0710 00:43:26.653502 
2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:26.655805 env[1307]: time="2025-07-10T00:43:26.655758194Z" level=info msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" Jul 10 00:43:26.688907 env[1307]: time="2025-07-10T00:43:26.688835722Z" level=error msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" failed" error="failed to destroy network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.689432 kubelet[2121]: E0710 00:43:26.689265 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:26.689432 kubelet[2121]: E0710 00:43:26.689324 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef"} Jul 10 00:43:26.689432 kubelet[2121]: E0710 00:43:26.689380 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fae28b13-a385-46eb-8a07-d49af21f8b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.689432 kubelet[2121]: E0710 00:43:26.689403 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fae28b13-a385-46eb-8a07-d49af21f8b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" podUID="fae28b13-a385-46eb-8a07-d49af21f8b28" Jul 10 00:43:26.690293 env[1307]: time="2025-07-10T00:43:26.690219481Z" level=error msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" failed" error="failed to destroy network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.690681 kubelet[2121]: E0710 00:43:26.690534 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:26.690681 kubelet[2121]: E0710 00:43:26.690564 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749"} Jul 10 00:43:26.690681 kubelet[2121]: E0710 00:43:26.690608 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2307b87-39c7-43c6-8c91-1f74a3de69ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.690681 kubelet[2121]: E0710 00:43:26.690627 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2307b87-39c7-43c6-8c91-1f74a3de69ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" podUID="f2307b87-39c7-43c6-8c91-1f74a3de69ab" Jul 10 00:43:26.714141 env[1307]: time="2025-07-10T00:43:26.714058402Z" level=error msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" failed" error="failed to destroy network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.714141 env[1307]: time="2025-07-10T00:43:26.714077107Z" level=error msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" failed" error="failed to destroy network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.714775 kubelet[2121]: E0710 00:43:26.714485 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:26.714775 kubelet[2121]: E0710 00:43:26.714594 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df"} Jul 10 00:43:26.714775 kubelet[2121]: E0710 00:43:26.714641 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"e01c7c4e-eca5-4812-95c5-200d95d24a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.714775 kubelet[2121]: E0710 00:43:26.714730 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e01c7c4e-eca5-4812-95c5-200d95d24a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5prrt" podUID="e01c7c4e-eca5-4812-95c5-200d95d24a32" Jul 10 00:43:26.715008 kubelet[2121]: E0710 00:43:26.714800 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:26.715008 kubelet[2121]: E0710 00:43:26.714855 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b"} Jul 10 00:43:26.715008 kubelet[2121]: E0710 00:43:26.714887 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c49502b-c641-4c73-b4e5-5955ec9166b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.715008 kubelet[2121]: E0710 00:43:26.714905 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c49502b-c641-4c73-b4e5-5955ec9166b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nbqqb" podUID="3c49502b-c641-4c73-b4e5-5955ec9166b1" Jul 10 00:43:26.717860 env[1307]: time="2025-07-10T00:43:26.717801074Z" level=error msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" failed" error="failed to destroy network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.718201 
kubelet[2121]: E0710 00:43:26.718167 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:26.718260 kubelet[2121]: E0710 00:43:26.718202 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502"} Jul 10 00:43:26.718260 kubelet[2121]: E0710 00:43:26.718225 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.718260 kubelet[2121]: E0710 00:43:26.718240 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8lnpm" podUID="3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1" Jul 10 00:43:26.719126 env[1307]: time="2025-07-10T00:43:26.719068813Z" level=error msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" failed" error="failed to destroy network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.719390 kubelet[2121]: E0710 00:43:26.719330 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:26.719569 kubelet[2121]: E0710 00:43:26.719416 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761"} Jul 10 00:43:26.719569 kubelet[2121]: E0710 00:43:26.719465 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f04b4ccc-80be-498b-a53a-b961975a280d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.719569 kubelet[2121]: E0710 00:43:26.719498 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f04b4ccc-80be-498b-a53a-b961975a280d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hlrk9" podUID="f04b4ccc-80be-498b-a53a-b961975a280d" Jul 10 00:43:26.726306 env[1307]: time="2025-07-10T00:43:26.726259608Z" level=error msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" failed" error="failed to destroy network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.726522 kubelet[2121]: E0710 00:43:26.726482 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:26.726595 kubelet[2121]: E0710 00:43:26.726530 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd"} Jul 10 00:43:26.726595 kubelet[2121]: E0710 00:43:26.726561 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b28fe5df-eb28-446d-ba1a-53a53e95c947\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.726595 kubelet[2121]: E0710 00:43:26.726582 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b28fe5df-eb28-446d-ba1a-53a53e95c947\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" podUID="b28fe5df-eb28-446d-ba1a-53a53e95c947" Jul 10 00:43:26.734561 env[1307]: time="2025-07-10T00:43:26.734504034Z" level=error msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" failed" error="failed to destroy network for sandbox 
\"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:43:26.734806 kubelet[2121]: E0710 00:43:26.734760 2121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:26.734893 kubelet[2121]: E0710 00:43:26.734823 2121 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1"} Jul 10 00:43:26.734893 kubelet[2121]: E0710 00:43:26.734868 2121 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:43:26.734983 kubelet[2121]: E0710 00:43:26.734898 2121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-586d6954-lvt2n" podUID="506b8ef0-d513-4ac6-984e-8cdac4618a2c" Jul 10 00:43:31.912818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838384957.mount: Deactivated successfully. 
Jul 10 00:43:32.617862 env[1307]: time="2025-07-10T00:43:32.617726886Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:32.620267 env[1307]: time="2025-07-10T00:43:32.620221237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:32.621743 env[1307]: time="2025-07-10T00:43:32.621710824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:32.623308 env[1307]: time="2025-07-10T00:43:32.623264721Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:32.623773 env[1307]: time="2025-07-10T00:43:32.623728490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:43:32.631475 env[1307]: time="2025-07-10T00:43:32.631422426Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:43:32.653082 env[1307]: time="2025-07-10T00:43:32.653024927Z" level=info msg="CreateContainer within sandbox \"80beda439016327b31381a648d020816207de4af77de5911dd69d56010363d97\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72f61a0cd775e897954af8dbc204a3b42cbfa4ba9945a116e4b1c49e8375290a\"" Jul 10 00:43:32.653753 env[1307]: time="2025-07-10T00:43:32.653709966Z" level=info msg="StartContainer for \"72f61a0cd775e897954af8dbc204a3b42cbfa4ba9945a116e4b1c49e8375290a\"" Jul 10 00:43:33.086525 env[1307]: time="2025-07-10T00:43:33.086461010Z" level=info msg="StartContainer for \"72f61a0cd775e897954af8dbc204a3b42cbfa4ba9945a116e4b1c49e8375290a\" returns successfully" Jul 10 00:43:33.117116 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:43:33.117303 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 00:43:33.215815 env[1307]: time="2025-07-10T00:43:33.215752324Z" level=info msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.410 [INFO][3390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.411 [INFO][3390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" iface="eth0" netns="/var/run/netns/cni-ef30e368-8b45-83cb-1acd-8420ea192c9c" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.411 [INFO][3390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" iface="eth0" netns="/var/run/netns/cni-ef30e368-8b45-83cb-1acd-8420ea192c9c" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.412 [INFO][3390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" iface="eth0" netns="/var/run/netns/cni-ef30e368-8b45-83cb-1acd-8420ea192c9c" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.412 [INFO][3390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.412 [INFO][3390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.467 [INFO][3400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.467 [INFO][3400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.468 [INFO][3400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.474 [WARNING][3400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.474 [INFO][3400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.476 [INFO][3400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:33.480060 env[1307]: 2025-07-10 00:43:33.478 [INFO][3390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:43:33.480787 env[1307]: time="2025-07-10T00:43:33.480727495Z" level=info msg="TearDown network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" successfully" Jul 10 00:43:33.480787 env[1307]: time="2025-07-10T00:43:33.480779834Z" level=info msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" returns successfully" Jul 10 00:43:33.483368 systemd[1]: run-netns-cni\x2def30e368\x2d8b45\x2d83cb\x2d1acd\x2d8420ea192c9c.mount: Deactivated successfully. 
Jul 10 00:43:33.600232 kubelet[2121]: I0710 00:43:33.600171 2121 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-backend-key-pair\") pod \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " Jul 10 00:43:33.600232 kubelet[2121]: I0710 00:43:33.600220 2121 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-ca-bundle\") pod \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " Jul 10 00:43:33.600232 kubelet[2121]: I0710 00:43:33.600237 2121 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzv6\" (UniqueName: \"kubernetes.io/projected/506b8ef0-d513-4ac6-984e-8cdac4618a2c-kube-api-access-qkzv6\") pod \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\" (UID: \"506b8ef0-d513-4ac6-984e-8cdac4618a2c\") " Jul 10 00:43:33.625393 kubelet[2121]: I0710 00:43:33.625343 2121 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "506b8ef0-d513-4ac6-984e-8cdac4618a2c" (UID: "506b8ef0-d513-4ac6-984e-8cdac4618a2c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:43:33.629254 kubelet[2121]: I0710 00:43:33.629220 2121 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506b8ef0-d513-4ac6-984e-8cdac4618a2c-kube-api-access-qkzv6" (OuterVolumeSpecName: "kube-api-access-qkzv6") pod "506b8ef0-d513-4ac6-984e-8cdac4618a2c" (UID: "506b8ef0-d513-4ac6-984e-8cdac4618a2c"). InnerVolumeSpecName "kube-api-access-qkzv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:43:33.629370 kubelet[2121]: I0710 00:43:33.629333 2121 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "506b8ef0-d513-4ac6-984e-8cdac4618a2c" (UID: "506b8ef0-d513-4ac6-984e-8cdac4618a2c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:43:33.631257 systemd[1]: var-lib-kubelet-pods-506b8ef0\x2dd513\x2d4ac6\x2d984e\x2d8cdac4618a2c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:43:33.631402 systemd[1]: var-lib-kubelet-pods-506b8ef0\x2dd513\x2d4ac6\x2d984e\x2d8cdac4618a2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkzv6.mount: Deactivated successfully. 
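The systemd mount unit names just above (var-lib-kubelet-pods-506b8ef0\x2dd513…-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkzv6.mount) use systemd's path escaping: the leading slash is dropped, remaining slashes become '-', and other special characters are hex-escaped, so '-' shows up as \x2d and '~' as \x7e. The sketch below is a rough Go approximation of that encoding, useful for decoding unit names like these; it is not systemd's implementation, and systemd-escape --path is the authoritative tool.

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd path escaping: '/' -> '-', and any byte
// that is not an ASCII alphanumeric, ':', '_' or a non-leading '.' becomes
// a C-style \xNN escape.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9', r == '_', r == ':':
			b.WriteRune(r)
		case r == '.' && i != 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	// Volume path reconstructed from the kubelet lines above (pod UID + volume name).
	p := "/var/lib/kubelet/pods/506b8ef0-d513-4ac6-984e-8cdac4618a2c/volumes/kubernetes.io~projected/kube-api-access-qkzv6"
	fmt.Println(escapePath(p) + ".mount")
}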
Jul 10 00:43:33.701422 kubelet[2121]: I0710 00:43:33.701368 2121 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzv6\" (UniqueName: \"kubernetes.io/projected/506b8ef0-d513-4ac6-984e-8cdac4618a2c-kube-api-access-qkzv6\") on node \"localhost\" DevicePath \"\"" Jul 10 00:43:33.701580 kubelet[2121]: I0710 00:43:33.701430 2121 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:43:33.701580 kubelet[2121]: I0710 00:43:33.701461 2121 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/506b8ef0-d513-4ac6-984e-8cdac4618a2c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:43:33.821477 kubelet[2121]: I0710 00:43:33.821307 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x4lvf" podStartSLOduration=2.533934972 podStartE2EDuration="18.821288883s" podCreationTimestamp="2025-07-10 00:43:15 +0000 UTC" firstStartedPulling="2025-07-10 00:43:16.33712828 +0000 UTC m=+17.127508144" lastFinishedPulling="2025-07-10 00:43:32.624482191 +0000 UTC m=+33.414862055" observedRunningTime="2025-07-10 00:43:33.808742437 +0000 UTC m=+34.599122331" watchObservedRunningTime="2025-07-10 00:43:33.821288883 +0000 UTC m=+34.611668747" Jul 10 00:43:34.004994 kubelet[2121]: I0710 00:43:34.004936 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7n7h\" (UniqueName: \"kubernetes.io/projected/26af1667-89ed-4028-a44e-28547facf659-kube-api-access-p7n7h\") pod \"whisker-6564ff4954-6ghpb\" (UID: \"26af1667-89ed-4028-a44e-28547facf659\") " pod="calico-system/whisker-6564ff4954-6ghpb" Jul 10 00:43:34.004994 kubelet[2121]: I0710 00:43:34.004987 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26af1667-89ed-4028-a44e-28547facf659-whisker-backend-key-pair\") pod \"whisker-6564ff4954-6ghpb\" (UID: \"26af1667-89ed-4028-a44e-28547facf659\") " pod="calico-system/whisker-6564ff4954-6ghpb" Jul 10 00:43:34.005206 kubelet[2121]: I0710 00:43:34.005014 2121 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26af1667-89ed-4028-a44e-28547facf659-whisker-ca-bundle\") pod \"whisker-6564ff4954-6ghpb\" (UID: \"26af1667-89ed-4028-a44e-28547facf659\") " pod="calico-system/whisker-6564ff4954-6ghpb" Jul 10 00:43:34.153718 env[1307]: time="2025-07-10T00:43:34.153536642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564ff4954-6ghpb,Uid:26af1667-89ed-4028-a44e-28547facf659,Namespace:calico-system,Attempt:0,}" Jul 10 00:43:34.270025 systemd-networkd[1071]: cali56a6d41a159: Link UP Jul 10 00:43:34.272211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:43:34.272261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali56a6d41a159: link becomes ready Jul 10 00:43:34.272376 systemd-networkd[1071]: cali56a6d41a159: Gained carrier Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.190 [INFO][3422] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.203 [INFO][3422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6564ff4954--6ghpb-eth0 whisker-6564ff4954- calico-system 26af1667-89ed-4028-a44e-28547facf659 939 0 2025-07-10 00:43:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6564ff4954 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6564ff4954-6ghpb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali56a6d41a159 [] [] }} ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.203 [INFO][3422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.227 [INFO][3437] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" HandleID="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Workload="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.227 [INFO][3437] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" HandleID="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Workload="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000494af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6564ff4954-6ghpb", "timestamp":"2025-07-10 00:43:34.227630958 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.228 [INFO][3437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.228 [INFO][3437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.228 [INFO][3437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.235 [INFO][3437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.240 [INFO][3437] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.245 [INFO][3437] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.246 [INFO][3437] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.248 [INFO][3437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.248 [INFO][3437] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.250 [INFO][3437] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5 Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.253 [INFO][3437] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.260 [INFO][3437] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.260 [INFO][3437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" host="localhost" Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.260 [INFO][3437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
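The IPAM lines above show the host "localhost" confirming its affinity for the block 192.168.88.128/26 and claiming an address from it, 192.168.88.129, for the new whisker pod before releasing the host-wide IPAM lock. A /26 block spans 64 addresses (192.168.88.128 through 192.168.88.191). The short Go sketch below is purely illustrative and just checks that containment with the standard net/netip package; the block and address are taken from the log.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and assigned address taken from the IPAM log lines above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.129")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses in the block.
	size := 1 << (32 - block.Bits())

	fmt.Printf("block %s holds %d addresses\n", block, size)
	fmt.Printf("%s inside block: %v\n", assigned, block.Contains(assigned))
}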
Jul 10 00:43:34.285203 env[1307]: 2025-07-10 00:43:34.260 [INFO][3437] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" HandleID="k8s-pod-network.8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Workload="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.262 [INFO][3422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6564ff4954--6ghpb-eth0", GenerateName:"whisker-6564ff4954-", Namespace:"calico-system", SelfLink:"", UID:"26af1667-89ed-4028-a44e-28547facf659", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6564ff4954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6564ff4954-6ghpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56a6d41a159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.262 [INFO][3422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.262 [INFO][3422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56a6d41a159 ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.273 [INFO][3422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.273 [INFO][3422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6564ff4954--6ghpb-eth0", GenerateName:"whisker-6564ff4954-", Namespace:"calico-system", SelfLink:"", UID:"26af1667-89ed-4028-a44e-28547facf659", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6564ff4954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5", Pod:"whisker-6564ff4954-6ghpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56a6d41a159", MAC:"76:a1:2b:63:53:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:34.285796 env[1307]: 2025-07-10 00:43:34.283 [INFO][3422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5" Namespace="calico-system" Pod="whisker-6564ff4954-6ghpb" WorkloadEndpoint="localhost-k8s-whisker--6564ff4954--6ghpb-eth0" Jul 10 00:43:34.295177 env[1307]: time="2025-07-10T00:43:34.295106615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:34.295177 env[1307]: time="2025-07-10T00:43:34.295153825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:34.295177 env[1307]: time="2025-07-10T00:43:34.295167641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:34.295398 env[1307]: time="2025-07-10T00:43:34.295305883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5 pid=3460 runtime=io.containerd.runc.v2 Jul 10 00:43:34.320565 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:34.344773 env[1307]: time="2025-07-10T00:43:34.344714471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564ff4954-6ghpb,Uid:26af1667-89ed-4028-a44e-28547facf659,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5\"" Jul 10 00:43:34.346582 env[1307]: time="2025-07-10T00:43:34.346535323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:43:34.675080 kubelet[2121]: I0710 00:43:34.675039 2121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:43:34.841000 audit[3534]: AVC avc: denied { write } for pid=3534 comm="tee" name="fd" dev="proc" ino=23262 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.866695 kernel: audit: type=1400 audit(1752108214.841:284): avc: denied { write } for pid=3534 comm="tee" name="fd" dev="proc" ino=23262 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.841000 audit[3534]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb31357de a2=241 a3=1b6 items=1 ppid=3508 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.841000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 00:43:34.877397 kernel: audit: type=1300 audit(1752108214.841:284): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb31357de a2=241 a3=1b6 items=1 ppid=3508 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.877443 kernel: audit: type=1307 audit(1752108214.841:284): cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 00:43:34.877469 kernel: audit: type=1302 audit(1752108214.841:284): item=0 name="/dev/fd/63" inode=24078 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.841000 audit: PATH item=0 name="/dev/fd/63" inode=24078 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.852000 audit[3545]: AVC avc: denied { write } for pid=3545 comm="tee" name="fd" dev="proc" ino=24093 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.886160 kernel: audit: type=1327 audit(1752108214.841:284): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.886210 kernel: audit: type=1400 audit(1752108214.852:285): avc: denied { write } for pid=3545 comm="tee" name="fd" dev="proc" ino=24093 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.886246 kernel: audit: type=1300 audit(1752108214.852:285): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb1fa77ef a2=241 a3=1b6 items=1 ppid=3514 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.852000 audit[3545]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb1fa77ef a2=241 a3=1b6 items=1 ppid=3514 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.852000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 10 00:43:34.895979 kernel: audit: type=1307 audit(1752108214.852:285): cwd="/etc/service/enabled/cni/log" Jul 10 00:43:34.896028 kernel: audit: type=1302 audit(1752108214.852:285): item=0 name="/dev/fd/63" inode=25714 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.852000 audit: PATH item=0 name="/dev/fd/63" inode=25714 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.898806 kernel: audit: type=1327 audit(1752108214.852:285): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.869000 audit[3579]: AVC avc: denied { write } for pid=3579 comm="tee" name="fd" dev="proc" ino=24102 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.869000 audit[3579]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff3e6a7dd a2=241 a3=1b6 items=1 ppid=3509 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.869000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 10 00:43:34.869000 audit: PATH item=0 name="/dev/fd/63" inode=25720 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.889000 audit[3567]: AVC avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=23275 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.889000 audit[3567]: SYSCALL arch=c000003e syscall=257 success=yes 
exit=3 a0=ffffff9c a1=7ffe97db97ed a2=241 a3=1b6 items=1 ppid=3512 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.889000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 10 00:43:34.889000 audit: PATH item=0 name="/dev/fd/63" inode=23270 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.899000 audit[3586]: AVC avc: denied { write } for pid=3586 comm="tee" name="fd" dev="proc" ino=23280 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.899000 audit[3586]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd873d77ed a2=241 a3=1b6 items=1 ppid=3516 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.899000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 10 00:43:34.899000 audit: PATH item=0 name="/dev/fd/63" inode=23277 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.901000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=24108 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.901000 audit[3577]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff484c97ed a2=241 a3=1b6 items=1 ppid=3505 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.901000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 10 00:43:34.901000 audit: PATH item=0 name="/dev/fd/63" inode=25242 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:34.906000 audit[3571]: AVC avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=25723 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:43:34.906000 audit[3571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4b50d7ee a2=241 a3=1b6 items=1 ppid=3504 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:34.906000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 10 00:43:34.906000 audit: PATH item=0 name="/dev/fd/63" inode=25717 dev=00:0c mode=010600 ouid=0 
ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:43:34.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.083000 audit: BPF prog-id=10 op=LOAD Jul 10 00:43:35.083000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0c1772a0 a2=98 a3=1fffffffffffffff items=0 ppid=3518 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.083000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:43:35.084000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit: BPF prog-id=11 op=LOAD Jul 10 00:43:35.084000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0c177180 a2=94 a3=3 items=0 ppid=3518 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.084000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:43:35.084000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit: BPF prog-id=12 op=LOAD Jul 10 00:43:35.084000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0c1771c0 a2=94 a3=7fff0c1773a0 items=0 ppid=3518 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.084000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:43:35.084000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:43:35.084000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.084000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff0c177290 a2=50 a3=a000000085 items=0 ppid=3518 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.084000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.086000 audit: BPF prog-id=13 op=LOAD Jul 10 00:43:35.086000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc22439110 a2=98 a3=3 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.086000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit: BPF prog-id=14 op=LOAD Jul 10 00:43:35.087000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc22438f00 a2=94 a3=54428f items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.087000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.087000 audit: BPF prog-id=14 op=UNLOAD Jul 10 
00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.087000 audit: BPF prog-id=15 op=LOAD Jul 10 00:43:35.087000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc22438f30 a2=94 a3=2 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.087000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.087000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 
audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.192000 audit: BPF prog-id=16 op=LOAD Jul 10 00:43:35.192000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc22438df0 a2=94 a3=1 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.192000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.193000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:43:35.193000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.193000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc22438ec0 a2=50 a3=7ffc22438fa0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e00 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc22438e30 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc22438d40 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e50 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e30 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e20 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e50 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc22438e30 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc22438e50 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc22438e20 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc22438e90 a2=28 a3=0 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc22438c40 a2=50 a3=1 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.201000 audit: BPF prog-id=17 op=LOAD Jul 10 00:43:35.201000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc22438c40 a2=94 a3=5 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.202000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc22438cf0 a2=50 a3=1 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc22438e10 a2=4 a3=38 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { confidentiality } for pid=3605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.202000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc22438e60 a2=94 a3=6 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { confidentiality } for pid=3605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.202000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc22438610 a2=94 a3=88 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { perfmon } for pid=3605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: denied { bpf } for pid=3605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.202000 audit[3605]: AVC avc: 
denied { confidentiality } for pid=3605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.202000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc22438610 a2=94 a3=88 items=0 ppid=3518 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.209000 audit: BPF prog-id=18 op=LOAD Jul 10 00:43:35.209000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1850a7d0 a2=98 a3=1999999999999999 items=0 ppid=3518 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:43:35.210000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit: BPF prog-id=19 op=LOAD Jul 10 00:43:35.210000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1850a6b0 a2=94 a3=ffff items=0 ppid=3518 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:43:35.210000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 
audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.210000 audit: BPF prog-id=20 op=LOAD Jul 10 00:43:35.210000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1850a6f0 a2=94 a3=7ffc1850a8d0 items=0 ppid=3518 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:43:35.210000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:43:35.322084 kubelet[2121]: I0710 00:43:35.322034 2121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506b8ef0-d513-4ac6-984e-8cdac4618a2c" path="/var/lib/kubelet/pods/506b8ef0-d513-4ac6-984e-8cdac4618a2c/volumes" Jul 10 00:43:35.340385 systemd-networkd[1071]: vxlan.calico: Link UP Jul 10 00:43:35.340393 systemd-networkd[1071]: vxlan.calico: Gained carrier Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for 
pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit: BPF prog-id=21 op=LOAD Jul 10 00:43:35.358000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9badb400 a2=98 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.358000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.358000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit: BPF prog-id=22 op=LOAD Jul 10 00:43:35.358000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9badb210 a2=94 a3=54428f items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.358000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.358000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.358000 audit: BPF prog-id=23 op=LOAD Jul 10 00:43:35.358000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9badb240 a2=94 a3=2 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.358000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb110 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9badb140 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9badb050 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb160 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb140 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb130 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb160 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9badb140 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9badb160 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9badb130 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9badb1a0 a2=28 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit: BPF prog-id=24 op=LOAD Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9badb010 a2=94 a3=0 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd9badb000 a2=50 a3=2800 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd9badb000 a2=50 a3=2800 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit: BPF prog-id=25 op=LOAD Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9bada820 a2=94 a3=2 items=0 ppid=3518 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.359000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { perfmon } for pid=3649 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit[3649]: AVC avc: denied { bpf } for pid=3649 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.359000 audit: BPF prog-id=26 op=LOAD Jul 10 00:43:35.359000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9bada920 a2=94 a3=30 items=0 ppid=3518 pid=3649 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit: BPF prog-id=27 op=LOAD Jul 10 00:43:35.362000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe106b5e80 a2=98 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.362000 audit: BPF prog-id=27 op=UNLOAD Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit: BPF prog-id=28 op=LOAD Jul 10 00:43:35.362000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe106b5c70 a2=94 a3=54428f items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.362000 audit: BPF prog-id=28 op=UNLOAD Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.362000 audit: BPF prog-id=29 op=LOAD Jul 10 00:43:35.362000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe106b5ca0 a2=94 a3=2 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.362000 audit: BPF prog-id=29 op=UNLOAD Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit: BPF prog-id=30 op=LOAD Jul 10 00:43:35.473000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe106b5b60 a2=94 a3=1 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.473000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.473000 audit: BPF prog-id=30 op=UNLOAD Jul 10 00:43:35.473000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.473000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe106b5c30 a2=50 a3=7ffe106b5d10 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5b70 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe106b5ba0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe106b5ab0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5bc0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5ba0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5b90 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5bc0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe106b5ba0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe106b5bc0 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe106b5b90 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe106b5c00 a2=28 a3=0 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe106b59b0 a2=50 a3=1 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } 
for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit: BPF prog-id=31 op=LOAD Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe106b59b0 a2=94 a3=5 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit: BPF prog-id=31 op=UNLOAD Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe106b5a60 a2=50 a3=1 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.483000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.483000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe106b5b80 a2=4 a3=38 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { confidentiality } for pid=3653 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.484000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe106b5bd0 a2=94 a3=6 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { 
bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { confidentiality } for pid=3653 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.484000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe106b5380 a2=94 a3=88 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { perfmon } for pid=3653 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.484000 audit[3653]: AVC avc: denied { confidentiality } for pid=3653 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:43:35.484000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe106b5380 a2=94 a3=88 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.485000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.485000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe106b6db0 a2=10 a3=208 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.485000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.485000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.485000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe106b6c50 a2=10 a3=3 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.485000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.485000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.485000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe106b6bf0 a2=10 a3=3 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.485000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.485000 audit[3653]: AVC avc: denied { bpf } for pid=3653 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:43:35.485000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe106b6bf0 a2=10 a3=7 items=0 ppid=3518 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.485000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:43:35.492000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:43:35.539000 audit[3687]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3687 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:35.539000 audit[3687]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff41c4a170 a2=0 a3=7fff41c4a15c items=0 ppid=3518 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.539000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:35.544000 audit[3685]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=3685 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:35.544000 audit[3685]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffeb4230590 a2=0 a3=7ffeb423057c items=0 ppid=3518 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.544000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:35.547000 audit[3684]: NETFILTER_CFG table=raw:101 family=2 entries=21 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:35.547000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe88084f60 a2=0 a3=7ffe88084f4c 
items=0 ppid=3518 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.547000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:35.553000 audit[3689]: NETFILTER_CFG table=filter:102 family=2 entries=94 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:35.553000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffebb6ccd40 a2=0 a3=7ffebb6ccd2c items=0 ppid=3518 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.553000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:35.676203 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:35614.service. Jul 10 00:43:35.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.99:22-10.0.0.1:35614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:35.721000 audit[3701]: USER_ACCT pid=3701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.722953 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:35.722000 audit[3701]: CRED_ACQ pid=3701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.722000 audit[3701]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe19755710 a2=3 a3=0 items=0 ppid=1 pid=3701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:35.722000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:35.724216 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:35.728557 systemd-logind[1287]: New session 8 of user core. Jul 10 00:43:35.729459 systemd[1]: Started session-8.scope. 
Jul 10 00:43:35.732000 audit[3701]: USER_START pid=3701 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.734000 audit[3704]: CRED_ACQ pid=3704 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.851940 sshd[3701]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:35.851000 audit[3701]: USER_END pid=3701 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.851000 audit[3701]: CRED_DISP pid=3701 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:35.854162 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:35614.service: Deactivated successfully. Jul 10 00:43:35.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.99:22-10.0.0.1:35614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:35.855143 systemd-logind[1287]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:43:35.855170 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:43:35.855917 systemd-logind[1287]: Removed session 8. 
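The audit PROCTITLE fields repeated through the bpftool and iptables records above are the process command lines, hex-encoded with NUL bytes separating the argv entries (auditd emits proctitle this way because the raw buffer contains non-printable separators). Decoded, they correspond to invocations such as "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A" and "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". The following is a minimal decoding sketch, not part of the log or of auditd; decode_proctitle is an illustrative name, and auditd's own tooling (for example ausearch -i) performs the same interpretation.

    # Minimal sketch: decode an audit PROCTITLE hex payload into its argv parts.
    # The sample value is copied from the bpftool entries above; decode_proctitle
    # is an illustrative helper name, not something taken from the log or auditd.
    import binascii

    def decode_proctitle(hex_payload):
        raw = binascii.unhexlify(hex_payload)
        # argv entries are NUL-separated in the kernel's proctitle buffer
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    sample = ("627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
              "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
              "6566696C7465725F76315F63616C69636F5F746D705F41")

    print(" ".join(decode_proctitle(sample)))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A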
Jul 10 00:43:36.304211 systemd-networkd[1071]: cali56a6d41a159: Gained IPv6LL Jul 10 00:43:36.356930 env[1307]: time="2025-07-10T00:43:36.356866708Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:36.359390 env[1307]: time="2025-07-10T00:43:36.359357058Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:36.360929 env[1307]: time="2025-07-10T00:43:36.360901676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:36.362357 env[1307]: time="2025-07-10T00:43:36.362316657Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:36.363000 env[1307]: time="2025-07-10T00:43:36.362968063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:43:36.365257 env[1307]: time="2025-07-10T00:43:36.365224239Z" level=info msg="CreateContainer within sandbox \"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:43:36.386233 env[1307]: time="2025-07-10T00:43:36.386168368Z" level=info msg="CreateContainer within sandbox \"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"06582573eba159abd0383535bc9e5f538c3435cae357f96ca026dbaf7bb09bf0\"" Jul 10 00:43:36.386805 env[1307]: time="2025-07-10T00:43:36.386776019Z" level=info msg="StartContainer for \"06582573eba159abd0383535bc9e5f538c3435cae357f96ca026dbaf7bb09bf0\"" Jul 10 00:43:36.397400 kubelet[2121]: I0710 00:43:36.392789 2121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:43:36.491549 env[1307]: time="2025-07-10T00:43:36.491496643Z" level=info msg="StartContainer for \"06582573eba159abd0383535bc9e5f538c3435cae357f96ca026dbaf7bb09bf0\" returns successfully" Jul 10 00:43:36.500103 env[1307]: time="2025-07-10T00:43:36.500052109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:43:36.944004 systemd-networkd[1071]: vxlan.calico: Gained IPv6LL Jul 10 00:43:38.320934 env[1307]: time="2025-07-10T00:43:38.320884570Z" level=info msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" Jul 10 00:43:38.321460 env[1307]: time="2025-07-10T00:43:38.321387684Z" level=info msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" Jul 10 00:43:38.321607 env[1307]: time="2025-07-10T00:43:38.321566181Z" level=info msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" Jul 10 00:43:38.325289 env[1307]: time="2025-07-10T00:43:38.320899759Z" level=info msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" iface="eth0" netns="/var/run/netns/cni-8195a623-5c62-3b52-2ded-0e5af4f36f9d" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" iface="eth0" netns="/var/run/netns/cni-8195a623-5c62-3b52-2ded-0e5af4f36f9d" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" iface="eth0" netns="/var/run/netns/cni-8195a623-5c62-3b52-2ded-0e5af4f36f9d" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.423 [INFO][3867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.451 [INFO][3895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.451 [INFO][3895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.451 [INFO][3895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.461 [WARNING][3895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.461 [INFO][3895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.462 [INFO][3895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:38.470347 env[1307]: 2025-07-10 00:43:38.466 [INFO][3867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:43:38.473506 systemd[1]: run-netns-cni\x2d8195a623\x2d5c62\x2d3b52\x2d2ded\x2d0e5af4f36f9d.mount: Deactivated successfully. 
Jul 10 00:43:38.474708 env[1307]: time="2025-07-10T00:43:38.474619378Z" level=info msg="TearDown network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" successfully" Jul 10 00:43:38.474708 env[1307]: time="2025-07-10T00:43:38.474703809Z" level=info msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" returns successfully" Jul 10 00:43:38.475672 env[1307]: time="2025-07-10T00:43:38.475614064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lnpm,Uid:3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1,Namespace:calico-system,Attempt:1,}" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.399 [INFO][3845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.399 [INFO][3845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" iface="eth0" netns="/var/run/netns/cni-0acbdb35-3b72-c9f8-7d43-c7921b26c0a4" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.400 [INFO][3845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" iface="eth0" netns="/var/run/netns/cni-0acbdb35-3b72-c9f8-7d43-c7921b26c0a4" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.401 [INFO][3845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" iface="eth0" netns="/var/run/netns/cni-0acbdb35-3b72-c9f8-7d43-c7921b26c0a4" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.401 [INFO][3845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.402 [INFO][3845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.463 [INFO][3883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.463 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.463 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.469 [WARNING][3883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.469 [INFO][3883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.476 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:38.481695 env[1307]: 2025-07-10 00:43:38.480 [INFO][3845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:38.485339 env[1307]: time="2025-07-10T00:43:38.481841012Z" level=info msg="TearDown network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" successfully" Jul 10 00:43:38.485339 env[1307]: time="2025-07-10T00:43:38.481872953Z" level=info msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" returns successfully" Jul 10 00:43:38.485339 env[1307]: time="2025-07-10T00:43:38.482594330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-c8pph,Uid:f2307b87-39c7-43c6-8c91-1f74a3de69ab,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:43:38.484161 systemd[1]: run-netns-cni\x2d0acbdb35\x2d3b72\x2dc9f8\x2d7d43\x2dc7921b26c0a4.mount: Deactivated successfully. Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.410 [INFO][3846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.411 [INFO][3846] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" iface="eth0" netns="/var/run/netns/cni-b465c883-eb9b-fd7d-2b1f-2cc55734e67d" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.411 [INFO][3846] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" iface="eth0" netns="/var/run/netns/cni-b465c883-eb9b-fd7d-2b1f-2cc55734e67d" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.411 [INFO][3846] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" iface="eth0" netns="/var/run/netns/cni-b465c883-eb9b-fd7d-2b1f-2cc55734e67d" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.411 [INFO][3846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.411 [INFO][3846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.467 [INFO][3888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.468 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.477 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.485 [WARNING][3888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.485 [INFO][3888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.486 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:38.491159 env[1307]: 2025-07-10 00:43:38.488 [INFO][3846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:43:38.493914 env[1307]: time="2025-07-10T00:43:38.493858122Z" level=info msg="TearDown network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" successfully" Jul 10 00:43:38.494086 env[1307]: time="2025-07-10T00:43:38.494026402Z" level=info msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" returns successfully" Jul 10 00:43:38.495164 kubelet[2121]: E0710 00:43:38.494665 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:38.495509 systemd[1]: run-netns-cni\x2db465c883\x2deb9b\x2dfd7d\x2d2b1f\x2d2cc55734e67d.mount: Deactivated successfully. Jul 10 00:43:38.497498 env[1307]: time="2025-07-10T00:43:38.497445740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbqqb,Uid:3c49502b-c641-4c73-b4e5-5955ec9166b1,Namespace:kube-system,Attempt:1,}" Jul 10 00:43:38.802672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225145531.mount: Deactivated successfully. 
Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" iface="eth0" netns="/var/run/netns/cni-9360e4a5-21d6-ba87-f1d7-ab926218894f" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" iface="eth0" netns="/var/run/netns/cni-9360e4a5-21d6-ba87-f1d7-ab926218894f" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" iface="eth0" netns="/var/run/netns/cni-9360e4a5-21d6-ba87-f1d7-ab926218894f" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.415 [INFO][3866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.479 [INFO][3902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.479 [INFO][3902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.486 [INFO][3902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.758 [WARNING][3902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.758 [INFO][3902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.841 [INFO][3902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:38.845626 env[1307]: 2025-07-10 00:43:38.843 [INFO][3866] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:43:38.846145 env[1307]: time="2025-07-10T00:43:38.845868736Z" level=info msg="TearDown network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" successfully" Jul 10 00:43:38.846145 env[1307]: time="2025-07-10T00:43:38.845906928Z" level=info msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" returns successfully" Jul 10 00:43:38.846762 env[1307]: time="2025-07-10T00:43:38.846728726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-vm5gv,Uid:fae28b13-a385-46eb-8a07-d49af21f8b28,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:43:38.848359 systemd[1]: run-netns-cni\x2d9360e4a5\x2d21d6\x2dba87\x2df1d7\x2dab926218894f.mount: Deactivated successfully. Jul 10 00:43:38.957040 env[1307]: time="2025-07-10T00:43:38.956994087Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:38.959751 env[1307]: time="2025-07-10T00:43:38.959712839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:38.962003 env[1307]: time="2025-07-10T00:43:38.961978782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:38.965075 env[1307]: time="2025-07-10T00:43:38.965040454Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:38.965766 env[1307]: time="2025-07-10T00:43:38.965687861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:43:38.968921 env[1307]: time="2025-07-10T00:43:38.968868959Z" level=info msg="CreateContainer within sandbox \"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:43:38.986920 env[1307]: time="2025-07-10T00:43:38.986851061Z" level=info msg="CreateContainer within sandbox \"8fb2d0fd934808a9fdb5f15139713fe3f567b35fceb7c9c15e6409a3d74749b5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e42cfdd09404482325ee4ed0979767ed07b69f5242043fd2eb3d07ec5118d7de\"" Jul 10 00:43:38.989979 env[1307]: time="2025-07-10T00:43:38.989941226Z" level=info msg="StartContainer for \"e42cfdd09404482325ee4ed0979767ed07b69f5242043fd2eb3d07ec5118d7de\"" Jul 10 00:43:39.083920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:43:39.084051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0260d0fff8a: link becomes ready Jul 10 00:43:39.083293 systemd-networkd[1071]: cali0260d0fff8a: Link UP Jul 10 00:43:39.085152 systemd-networkd[1071]: cali0260d0fff8a: Gained carrier Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:38.987 [INFO][3917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0 
calico-apiserver-646c7495cd- calico-apiserver f2307b87-39c7-43c6-8c91-1f74a3de69ab 1008 0 2025-07-10 00:43:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646c7495cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-646c7495cd-c8pph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0260d0fff8a [] [] }} ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:38.987 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.038 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" HandleID="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.038 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" HandleID="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-646c7495cd-c8pph", "timestamp":"2025-07-10 00:43:39.038042799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.038 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.038 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
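Each ADD above brackets its address assignment with "About to acquire host-wide IPAM lock" / "Acquired" / "Released": assignments on one host are serialized so concurrent pod creations cannot race for the same block entry. Below is a minimal sketch of that pattern only, assuming an in-process mutex and a toy counter rather than Calico's actual datastore-backed lock and block bitmap.

package main

import (
	"fmt"
	"log"
	"sync"
)

// hostIPAM serializes all assignments on this host behind one lock,
// mirroring the "host-wide IPAM lock" messages in the log above.
type hostIPAM struct {
	mu   sync.Mutex
	next int // index of the next unassigned address within the block (toy state)
	base [4]byte
}

func (h *hostIPAM) AutoAssign(handleID string) (string, error) {
	log.Printf("About to acquire host-wide IPAM lock. handle=%s", handleID)
	h.mu.Lock()
	log.Print("Acquired host-wide IPAM lock.")
	defer func() {
		h.mu.Unlock()
		log.Print("Released host-wide IPAM lock.")
	}()

	if h.next >= 64 { // a /26 holds 64 addresses
		return "", fmt.Errorf("block exhausted")
	}
	ip := fmt.Sprintf("%d.%d.%d.%d/26", h.base[0], h.base[1], h.base[2], int(h.base[3])+h.next)
	h.next++
	return ip, nil
}

func main() {
	// Start at offset 2: .128 and .129 are assumed already taken before these ADDs.
	ipam := &hostIPAM{base: [4]byte{192, 168, 88, 128}, next: 2}
	var wg sync.WaitGroup
	for _, handle := range []string{"k8s-pod-network.ae075cb1...", "k8s-pod-network.4b512daf..."} {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			ip, err := ipam.AutoAssign(h)
			if err != nil {
				log.Fatal(err)
			}
			log.Printf("Successfully claimed IPs: [%s] handle=%q", ip, h)
		}(handle)
	}
	wg.Wait()
}
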
Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.038 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.047 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.052 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.057 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.059 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.060 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.060 [INFO][3980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.062 [INFO][3980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.069 [INFO][3980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.076 [INFO][3980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.076 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" host="localhost" Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.076 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
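Within the lock, the plugin confirms the host's affinity for the 192.168.88.128/26 block and then claims the first free address in it (192.168.88.130/26 here). The arithmetic behind that claim can be sketched with the standard library's net/netip; the in-memory "used" set is an assumption standing in for the real block allocation state.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block in order and returns the first address
// that is not already recorded as used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the host's affine block from the log
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // assumed already allocated earlier on this host
		netip.MustParseAddr("192.168.88.129"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Printf("Successfully claimed IPs: [%s/26] block=%s\n", a, block) // prints 192.168.88.130/26
	}
}
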
Jul 10 00:43:39.096901 env[1307]: 2025-07-10 00:43:39.076 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" HandleID="k8s-pod-network.ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.079 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2307b87-39c7-43c6-8c91-1f74a3de69ab", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-646c7495cd-c8pph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0260d0fff8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.079 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.079 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0260d0fff8a ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.083 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.085 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2307b87-39c7-43c6-8c91-1f74a3de69ab", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb", Pod:"calico-apiserver-646c7495cd-c8pph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0260d0fff8a", MAC:"d6:8f:3b:01:ee:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.097616 env[1307]: 2025-07-10 00:43:39.093 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-c8pph" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:39.245717 env[1307]: time="2025-07-10T00:43:39.245629837Z" level=info msg="StartContainer for \"e42cfdd09404482325ee4ed0979767ed07b69f5242043fd2eb3d07ec5118d7de\" returns successfully" Jul 10 00:43:39.244000 audit[4062]: NETFILTER_CFG table=filter:103 family=2 entries=50 op=nft_register_chain pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:39.244000 audit[4062]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff798d3470 a2=0 a3=7fff798d345c items=0 ppid=3518 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.244000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:39.252898 env[1307]: time="2025-07-10T00:43:39.252838874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:39.253114 env[1307]: time="2025-07-10T00:43:39.253078970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:39.253221 env[1307]: time="2025-07-10T00:43:39.253197043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:39.253496 env[1307]: time="2025-07-10T00:43:39.253463658Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb pid=4071 runtime=io.containerd.runc.v2 Jul 10 00:43:39.274197 systemd-networkd[1071]: cali291442b6c06: Link UP Jul 10 00:43:39.276263 systemd-networkd[1071]: cali291442b6c06: Gained carrier Jul 10 00:43:39.276773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali291442b6c06: link becomes ready Jul 10 00:43:39.280907 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.038 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8lnpm-eth0 csi-node-driver- calico-system 3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1 1011 0 2025-07-10 00:43:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8lnpm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali291442b6c06 [] [] }} ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.038 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.107 [INFO][4010] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" HandleID="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.107 [INFO][4010] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" HandleID="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b6cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8lnpm", "timestamp":"2025-07-10 00:43:39.107694606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.108 [INFO][4010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
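Once the veth pair for a new endpoint exists, systemd-networkd reports the cali* interface as "Link UP" and "Gained carrier", and the kernel logs "link becomes ready" when IPv6 address configuration can proceed, as seen above for cali0260d0fff8a and cali291442b6c06. A small sketch of how that readiness can be checked from userspace on Linux, reading the interface's operstate and carrier attributes from sysfs (paths assumed to follow the usual /sys/class/net layout):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// linkReady reports whether a network interface is operationally up
// and has carrier, based on its sysfs attributes.
func linkReady(iface string) (bool, error) {
	base := filepath.Join("/sys/class/net", iface)
	oper, err := os.ReadFile(filepath.Join(base, "operstate"))
	if err != nil {
		return false, err
	}
	carrier, err := os.ReadFile(filepath.Join(base, "carrier"))
	if err != nil {
		// Reading carrier fails with EINVAL while the link is administratively down.
		return false, err
	}
	return strings.TrimSpace(string(oper)) == "up" &&
		strings.TrimSpace(string(carrier)) == "1", nil
}

func main() {
	for _, iface := range []string{"cali0260d0fff8a", "cali291442b6c06"} {
		ok, err := linkReady(iface)
		fmt.Printf("%s ready=%v err=%v\n", iface, ok, err)
	}
}
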
Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.108 [INFO][4010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.108 [INFO][4010] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.145 [INFO][4010] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.172 [INFO][4010] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.243 [INFO][4010] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.246 [INFO][4010] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.250 [INFO][4010] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.250 [INFO][4010] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.252 [INFO][4010] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986 Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.258 [INFO][4010] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.266 [INFO][4010] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.266 [INFO][4010] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" host="localhost" Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.266 [INFO][4010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
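The audit triplets interleaved through this section (NETFILTER_CFG, SYSCALL, PROCTITLE) record each iptables-nft restore that programs the new endpoint's chains; the PROCTITLE field is the process's argv, hex-encoded with NUL separators. A short decoder sketch for those proctitle strings, using the value from the audit record further up:

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the argv
// it encodes; arguments are separated by NUL bytes in the raw record.
func decodeProctitle(s string) ([]string, error) {
	raw, err := hex.DecodeString(s)
	if err != nil {
		return nil, err
	}
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	argv, err := decodeProctitle(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// Prints: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
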
Jul 10 00:43:39.291068 env[1307]: 2025-07-10 00:43:39.266 [INFO][4010] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" HandleID="k8s-pod-network.4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.269 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lnpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8lnpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali291442b6c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.270 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.270 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali291442b6c06 ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.277 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.277 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lnpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986", Pod:"csi-node-driver-8lnpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali291442b6c06", MAC:"76:66:34:c8:10:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.291882 env[1307]: 2025-07-10 00:43:39.288 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986" Namespace="calico-system" Pod="csi-node-driver-8lnpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:43:39.306574 env[1307]: time="2025-07-10T00:43:39.306497181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:39.306825 env[1307]: time="2025-07-10T00:43:39.306786750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:39.306952 env[1307]: time="2025-07-10T00:43:39.306922426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:39.308134 env[1307]: time="2025-07-10T00:43:39.307218558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986 pid=4113 runtime=io.containerd.runc.v2 Jul 10 00:43:39.313000 audit[4130]: NETFILTER_CFG table=filter:104 family=2 entries=40 op=nft_register_chain pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:39.313000 audit[4130]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fff715f1fb0 a2=0 a3=7fff715f1f9c items=0 ppid=3518 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.313000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:39.319025 env[1307]: time="2025-07-10T00:43:39.318977546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-c8pph,Uid:f2307b87-39c7-43c6-8c91-1f74a3de69ab,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb\"" Jul 10 00:43:39.322945 env[1307]: time="2025-07-10T00:43:39.322872764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:43:39.336930 systemd-networkd[1071]: cali3c8505c1030: Link UP Jul 10 00:43:39.341695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3c8505c1030: link becomes ready Jul 10 00:43:39.342183 systemd-networkd[1071]: cali3c8505c1030: Gained carrier Jul 10 00:43:39.352556 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.030 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0 coredns-7c65d6cfc9- kube-system 3c49502b-c641-4c73-b4e5-5955ec9166b1 1009 0 2025-07-10 00:43:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-nbqqb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c8505c1030 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.031 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.120 [INFO][4009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" HandleID="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 
00:43:39.361828 env[1307]: 2025-07-10 00:43:39.120 [INFO][4009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" HandleID="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e960), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-nbqqb", "timestamp":"2025-07-10 00:43:39.120167336 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.120 [INFO][4009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.266 [INFO][4009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.266 [INFO][4009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.274 [INFO][4009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.290 [INFO][4009] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.308 [INFO][4009] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.309 [INFO][4009] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.311 [INFO][4009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.311 [INFO][4009] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.313 [INFO][4009] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561 Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.318 [INFO][4009] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.327 [INFO][4009] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.327 [INFO][4009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" host="localhost" Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.327 [INFO][4009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
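The kubelet "Nameserver limits exceeded" errors repeated in this section (dns.go:153) mean the node's resolv.conf lists more nameservers than the resolver limit of three, so the extras are dropped; the log shows the applied line as 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that cap, assuming a plain resolv.conf parser and the usual limit of 3 (the sample config below is hypothetical):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the classic resolver limit the kubelet validates against

// capNameservers extracts nameserver entries from resolv.conf content and
// keeps at most maxNameservers of them, in order.
func capNameservers(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := capNameservers(conf)
	if len(dropped) > 0 {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
			"the applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}
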
Jul 10 00:43:39.361828 env[1307]: 2025-07-10 00:43:39.327 [INFO][4009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" HandleID="k8s-pod-network.c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.331 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c49502b-c641-4c73-b4e5-5955ec9166b1", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-nbqqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c8505c1030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.331 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.331 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c8505c1030 ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.342 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.347 
[INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c49502b-c641-4c73-b4e5-5955ec9166b1", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561", Pod:"coredns-7c65d6cfc9-nbqqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c8505c1030", MAC:"ea:c4:36:fc:c3:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.362469 env[1307]: 2025-07-10 00:43:39.359 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nbqqb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:43:39.371243 env[1307]: time="2025-07-10T00:43:39.369997293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lnpm,Uid:3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986\"" Jul 10 00:43:39.375000 audit[4169]: NETFILTER_CFG table=filter:105 family=2 entries=50 op=nft_register_chain pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:39.375000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffddd407ed0 a2=0 a3=7ffddd407ebc items=0 ppid=3518 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.375000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:39.378610 env[1307]: 
time="2025-07-10T00:43:39.378505882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:39.379032 env[1307]: time="2025-07-10T00:43:39.378983538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:39.379170 env[1307]: time="2025-07-10T00:43:39.379141918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:39.379633 env[1307]: time="2025-07-10T00:43:39.379548067Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561 pid=4171 runtime=io.containerd.runc.v2 Jul 10 00:43:39.404293 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:39.435616 systemd-networkd[1071]: cali876d15320d1: Link UP Jul 10 00:43:39.436409 env[1307]: time="2025-07-10T00:43:39.436122749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbqqb,Uid:3c49502b-c641-4c73-b4e5-5955ec9166b1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561\"" Jul 10 00:43:39.438468 kubelet[2121]: E0710 00:43:39.436969 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:39.439301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali876d15320d1: link becomes ready Jul 10 00:43:39.438711 systemd-networkd[1071]: cali876d15320d1: Gained carrier Jul 10 00:43:39.445722 env[1307]: time="2025-07-10T00:43:39.445647303Z" level=info msg="CreateContainer within sandbox \"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.013 [INFO][3955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0 calico-apiserver-646c7495cd- calico-apiserver fae28b13-a385-46eb-8a07-d49af21f8b28 1010 0 2025-07-10 00:43:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646c7495cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-646c7495cd-vm5gv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali876d15320d1 [] [] }} ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.014 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.121 [INFO][4011] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" HandleID="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.121 [INFO][4011] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" HandleID="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-646c7495cd-vm5gv", "timestamp":"2025-07-10 00:43:39.121019289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.121 [INFO][4011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.328 [INFO][4011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.328 [INFO][4011] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.377 [INFO][4011] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.392 [INFO][4011] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.405 [INFO][4011] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.410 [INFO][4011] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.412 [INFO][4011] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.413 [INFO][4011] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.414 [INFO][4011] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127 Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.418 [INFO][4011] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.428 [INFO][4011] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.428 [INFO][4011] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" 
host="localhost" Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.428 [INFO][4011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:39.458694 env[1307]: 2025-07-10 00:43:39.428 [INFO][4011] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" HandleID="k8s-pod-network.f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.430 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"fae28b13-a385-46eb-8a07-d49af21f8b28", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-646c7495cd-vm5gv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876d15320d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.430 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.431 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali876d15320d1 ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.444 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.445 [INFO][3955] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"fae28b13-a385-46eb-8a07-d49af21f8b28", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127", Pod:"calico-apiserver-646c7495cd-vm5gv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876d15320d1", MAC:"ae:9f:7e:11:d3:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:39.459485 env[1307]: 2025-07-10 00:43:39.453 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127" Namespace="calico-apiserver" Pod="calico-apiserver-646c7495cd-vm5gv" WorkloadEndpoint="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:43:39.470000 audit[4212]: NETFILTER_CFG table=filter:106 family=2 entries=49 op=nft_register_chain pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:39.470000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffe3870ca10 a2=0 a3=7ffe3870c9fc items=0 ppid=3518 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.470000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:39.478297 env[1307]: time="2025-07-10T00:43:39.478251833Z" level=info msg="CreateContainer within sandbox \"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e86f976fa6978301a4ea5fa7c39a1b02bb967606a2cc3cd401ccfb13638a26b\"" Jul 10 00:43:39.482987 env[1307]: time="2025-07-10T00:43:39.482952479Z" level=info msg="StartContainer for \"9e86f976fa6978301a4ea5fa7c39a1b02bb967606a2cc3cd401ccfb13638a26b\"" Jul 10 00:43:39.484789 env[1307]: time="2025-07-10T00:43:39.477131561Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:39.484789 env[1307]: time="2025-07-10T00:43:39.477198247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:39.484789 env[1307]: time="2025-07-10T00:43:39.477213716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:39.484789 env[1307]: time="2025-07-10T00:43:39.477482937Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127 pid=4219 runtime=io.containerd.runc.v2 Jul 10 00:43:39.511979 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:39.542536 env[1307]: time="2025-07-10T00:43:39.542484803Z" level=info msg="StartContainer for \"9e86f976fa6978301a4ea5fa7c39a1b02bb967606a2cc3cd401ccfb13638a26b\" returns successfully" Jul 10 00:43:39.544700 env[1307]: time="2025-07-10T00:43:39.543235115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c7495cd-vm5gv,Uid:fae28b13-a385-46eb-8a07-d49af21f8b28,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127\"" Jul 10 00:43:39.689957 kubelet[2121]: E0710 00:43:39.689916 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:39.752247 kubelet[2121]: I0710 00:43:39.752160 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nbqqb" podStartSLOduration=35.752135601 podStartE2EDuration="35.752135601s" podCreationTimestamp="2025-07-10 00:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:39.721501334 +0000 UTC m=+40.511881188" watchObservedRunningTime="2025-07-10 00:43:39.752135601 +0000 UTC m=+40.542515465" Jul 10 00:43:39.760000 audit[4294]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:39.760000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1c726b70 a2=0 a3=7fff1c726b5c items=0 ppid=2287 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:39.766000 audit[4294]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:39.766000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff1c726b70 a2=0 a3=0 items=0 ppid=2287 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.766000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:39.779000 audit[4296]: NETFILTER_CFG table=filter:109 family=2 entries=19 op=nft_register_rule pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:39.779000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffda8bffe00 a2=0 a3=7ffda8bffdec items=0 ppid=2287 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:39.784000 audit[4296]: NETFILTER_CFG table=nat:110 family=2 entries=21 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:39.784000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7ffda8bffe00 a2=0 a3=7ffda8bffdec items=0 ppid=2287 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:39.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:40.335843 systemd-networkd[1071]: cali0260d0fff8a: Gained IPv6LL Jul 10 00:43:40.655859 systemd-networkd[1071]: cali291442b6c06: Gained IPv6LL Jul 10 00:43:40.700465 kubelet[2121]: E0710 00:43:40.700435 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:40.739049 kubelet[2121]: I0710 00:43:40.738980 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6564ff4954-6ghpb" podStartSLOduration=3.117561813 podStartE2EDuration="7.738960507s" podCreationTimestamp="2025-07-10 00:43:33 +0000 UTC" firstStartedPulling="2025-07-10 00:43:34.345894499 +0000 UTC m=+35.136274363" lastFinishedPulling="2025-07-10 00:43:38.967293193 +0000 UTC m=+39.757673057" observedRunningTime="2025-07-10 00:43:39.753813019 +0000 UTC m=+40.544192893" watchObservedRunningTime="2025-07-10 00:43:40.738960507 +0000 UTC m=+41.529340371" Jul 10 00:43:40.854849 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:43066.service. Jul 10 00:43:40.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.99:22-10.0.0.1:43066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:40.855879 kernel: kauditd_printk_skb: 582 callbacks suppressed Jul 10 00:43:40.855935 kernel: audit: type=1130 audit(1752108220.853:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.99:22-10.0.0.1:43066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:40.899000 audit[4305]: NETFILTER_CFG table=filter:111 family=2 entries=15 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:40.899000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe0e0358f0 a2=0 a3=7ffe0e0358dc items=0 ppid=2287 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:40.908463 kernel: audit: type=1325 audit(1752108220.899:411): table=filter:111 family=2 entries=15 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:40.908520 kernel: audit: type=1300 audit(1752108220.899:411): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe0e0358f0 a2=0 a3=7ffe0e0358dc items=0 ppid=2287 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:40.908556 kernel: audit: type=1327 audit(1752108220.899:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:40.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:40.913000 audit[4305]: NETFILTER_CFG table=nat:112 family=2 entries=37 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:40.913000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffe0e0358f0 a2=0 a3=7ffe0e0358dc items=0 ppid=2287 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:40.922561 kernel: audit: type=1325 audit(1752108220.913:412): table=nat:112 family=2 entries=37 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:40.922621 kernel: audit: type=1300 audit(1752108220.913:412): arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffe0e0358f0 a2=0 a3=7ffe0e0358dc items=0 ppid=2287 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:40.922646 kernel: audit: type=1327 audit(1752108220.913:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:40.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:40.933000 audit[4303]: USER_ACCT pid=4303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:40.935069 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:40.937000 audit[4303]: CRED_ACQ pid=4303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:40.939286 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:40.943219 kernel: audit: type=1101 audit(1752108220.933:413): pid=4303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:40.943268 kernel: audit: type=1103 audit(1752108220.937:414): pid=4303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:40.943308 kernel: audit: type=1006 audit(1752108220.937:415): pid=4303 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 10 00:43:40.944459 systemd[1]: Started session-9.scope. Jul 10 00:43:40.937000 audit[4303]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7b39340 a2=3 a3=0 items=0 ppid=1 pid=4303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:40.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:40.944879 systemd-logind[1287]: New session 9 of user core. Jul 10 00:43:40.948000 audit[4303]: USER_START pid=4303 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:40.950000 audit[4309]: CRED_ACQ pid=4309 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:41.078508 sshd[4303]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:41.078000 audit[4303]: USER_END pid=4303 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:41.078000 audit[4303]: CRED_DISP pid=4303 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:41.081108 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:43066.service: Deactivated successfully. Jul 10 00:43:41.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.99:22-10.0.0.1:43066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:41.082335 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:43:41.082824 systemd-logind[1287]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:43:41.083582 systemd-logind[1287]: Removed session 9. 
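A note on the audit records in this stretch of the log: when an argv element contains a non-printable byte (here the NUL separators between arguments), the kernel emits the PROCTITLE value hex-encoded rather than as plain text. A minimal, standard-library-only Python sketch for decoding such a value offline is shown below; the sample string is copied verbatim from one of the iptables-restore records above, and the helper name is purely illustrative, not part of any tool referenced in the log.

# Decode a hex-encoded audit PROCTITLE value (argv elements are NUL-separated).
proctitle_hex = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)

def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # Drop the trailing empty chunk and join argv elements with spaces.
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

print(decode_proctitle(proctitle_hex))
# -> iptables-restore -w 5 -W 100000 --noflush --counters

Decoded, the proctitle is the command line behind the NETFILTER_CFG events recorded above.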
Jul 10 00:43:41.242326 systemd-networkd[1071]: cali3c8505c1030: Gained IPv6LL Jul 10 00:43:41.320592 env[1307]: time="2025-07-10T00:43:41.320550617Z" level=info msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" Jul 10 00:43:41.321239 env[1307]: time="2025-07-10T00:43:41.320927762Z" level=info msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" iface="eth0" netns="/var/run/netns/cni-3f091dd9-0c43-af81-bb9e-5a264b440498" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" iface="eth0" netns="/var/run/netns/cni-3f091dd9-0c43-af81-bb9e-5a264b440498" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" iface="eth0" netns="/var/run/netns/cni-3f091dd9-0c43-af81-bb9e-5a264b440498" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.380 [INFO][4346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.409 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.410 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.410 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.416 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.416 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.418 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:41.421113 env[1307]: 2025-07-10 00:43:41.419 [INFO][4346] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:43:41.426619 systemd[1]: run-netns-cni\x2d3f091dd9\x2d0c43\x2daf81\x2dbb9e\x2d5a264b440498.mount: Deactivated successfully. Jul 10 00:43:41.428897 env[1307]: time="2025-07-10T00:43:41.428856267Z" level=info msg="TearDown network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" successfully" Jul 10 00:43:41.429018 env[1307]: time="2025-07-10T00:43:41.428996223Z" level=info msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" returns successfully" Jul 10 00:43:41.430005 env[1307]: time="2025-07-10T00:43:41.429950489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5prrt,Uid:e01c7c4e-eca5-4812-95c5-200d95d24a32,Namespace:calico-system,Attempt:1,}" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.374 [INFO][4341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.374 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" iface="eth0" netns="/var/run/netns/cni-4355561c-e21e-91d1-d895-cc1fe4d9f14c" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.375 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" iface="eth0" netns="/var/run/netns/cni-4355561c-e21e-91d1-d895-cc1fe4d9f14c" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.375 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" iface="eth0" netns="/var/run/netns/cni-4355561c-e21e-91d1-d895-cc1fe4d9f14c" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.375 [INFO][4341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.375 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.420 [INFO][4361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.420 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.420 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.426 [WARNING][4361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.427 [INFO][4361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.430 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:41.433865 env[1307]: 2025-07-10 00:43:41.432 [INFO][4341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:43:41.438725 systemd[1]: run-netns-cni\x2d4355561c\x2de21e\x2d91d1\x2dd895\x2dcc1fe4d9f14c.mount: Deactivated successfully. Jul 10 00:43:41.439560 env[1307]: time="2025-07-10T00:43:41.439508102Z" level=info msg="TearDown network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" successfully" Jul 10 00:43:41.439780 env[1307]: time="2025-07-10T00:43:41.439744329Z" level=info msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" returns successfully" Jul 10 00:43:41.440182 kubelet[2121]: E0710 00:43:41.440157 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:41.440873 env[1307]: time="2025-07-10T00:43:41.440829053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlrk9,Uid:f04b4ccc-80be-498b-a53a-b961975a280d,Namespace:kube-system,Attempt:1,}" Jul 10 00:43:41.487869 systemd-networkd[1071]: cali876d15320d1: Gained IPv6LL Jul 10 00:43:41.702867 kubelet[2121]: E0710 00:43:41.702830 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:42.293295 systemd-networkd[1071]: cali05471a71818: Link UP Jul 10 00:43:42.294414 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:43:42.294483 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali05471a71818: link becomes ready Jul 10 00:43:42.294586 systemd-networkd[1071]: cali05471a71818: Gained carrier Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.179 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--5prrt-eth0 goldmane-58fd7646b9- calico-system e01c7c4e-eca5-4812-95c5-200d95d24a32 1072 0 2025-07-10 00:43:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-5prrt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali05471a71818 [] [] }} ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.179 [INFO][4377] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.208 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" HandleID="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.208 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" HandleID="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-5prrt", "timestamp":"2025-07-10 00:43:42.208707675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.208 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.209 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.209 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.216 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.221 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.224 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.226 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.228 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.228 [INFO][4391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.229 [INFO][4391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07 Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.281 [INFO][4391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.288 [INFO][4391] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.288 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" host="localhost" Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.288 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:42.309647 env[1307]: 2025-07-10 00:43:42.288 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" HandleID="k8s-pod-network.c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.290 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5prrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e01c7c4e-eca5-4812-95c5-200d95d24a32", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-5prrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05471a71818", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.290 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.290 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05471a71818 ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.294 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.295 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5prrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e01c7c4e-eca5-4812-95c5-200d95d24a32", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07", Pod:"goldmane-58fd7646b9-5prrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05471a71818", MAC:"9e:d4:13:11:93:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.310555 env[1307]: 2025-07-10 00:43:42.307 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07" Namespace="calico-system" Pod="goldmane-58fd7646b9-5prrt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:43:42.320319 env[1307]: time="2025-07-10T00:43:42.320283662Z" level=info msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" Jul 10 00:43:42.332000 audit[4426]: NETFILTER_CFG table=filter:113 family=2 entries=66 op=nft_register_chain pid=4426 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:42.332000 audit[4426]: SYSCALL arch=c000003e syscall=46 success=yes exit=32784 a0=3 a1=7ffd1630a8a0 a2=0 a3=7ffd1630a88c items=0 ppid=3518 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:42.332000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:42.431134 env[1307]: time="2025-07-10T00:43:42.431090712Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" iface="eth0" netns="/var/run/netns/cni-242f90c8-d1f1-9272-c984-815241961444" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" iface="eth0" netns="/var/run/netns/cni-242f90c8-d1f1-9272-c984-815241961444" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" iface="eth0" netns="/var/run/netns/cni-242f90c8-d1f1-9272-c984-815241961444" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.387 [INFO][4421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.412 [INFO][4443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.412 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.413 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.431 [WARNING][4443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.431 [INFO][4443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.432 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:42.435907 env[1307]: 2025-07-10 00:43:42.434 [INFO][4421] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:43:42.436382 env[1307]: time="2025-07-10T00:43:42.436263788Z" level=info msg="TearDown network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" successfully" Jul 10 00:43:42.436382 env[1307]: time="2025-07-10T00:43:42.436295228Z" level=info msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" returns successfully" Jul 10 00:43:42.437057 env[1307]: time="2025-07-10T00:43:42.437025691Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:42.437223 env[1307]: time="2025-07-10T00:43:42.437185784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d58fb4f9-n2p7m,Uid:b28fe5df-eb28-446d-ba1a-53a53e95c947,Namespace:calico-system,Attempt:1,}" Jul 10 00:43:42.438810 systemd[1]: run-netns-cni\x2d242f90c8\x2dd1f1\x2d9272\x2dc984\x2d815241961444.mount: Deactivated successfully. Jul 10 00:43:42.439292 env[1307]: time="2025-07-10T00:43:42.439270903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:42.443312 env[1307]: time="2025-07-10T00:43:42.443122888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:42.443312 env[1307]: time="2025-07-10T00:43:42.443169145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:42.443312 env[1307]: time="2025-07-10T00:43:42.443179255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:42.443459 env[1307]: time="2025-07-10T00:43:42.443366589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07 pid=4460 runtime=io.containerd.runc.v2 Jul 10 00:43:42.444859 env[1307]: time="2025-07-10T00:43:42.444796077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:42.445041 env[1307]: time="2025-07-10T00:43:42.445006965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:43:42.446778 env[1307]: time="2025-07-10T00:43:42.446733305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:43:42.447645 env[1307]: time="2025-07-10T00:43:42.447626246Z" level=info msg="CreateContainer within sandbox \"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:43:42.469163 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:42.498322 env[1307]: time="2025-07-10T00:43:42.498272868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5prrt,Uid:e01c7c4e-eca5-4812-95c5-200d95d24a32,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07\"" Jul 10 00:43:42.542675 systemd-networkd[1071]: cali34eb155d9a1: Link UP Jul 10 00:43:42.545803 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali34eb155d9a1: link becomes ready Jul 10 00:43:42.544152 systemd-networkd[1071]: cali34eb155d9a1: Gained carrier Jul 10 00:43:42.554548 env[1307]: time="2025-07-10T00:43:42.554473006Z" level=info msg="CreateContainer within sandbox \"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95638b0190369ae51abc0ff8dc54bbfb7f937b397d9295249515146a168eb3e1\"" Jul 10 00:43:42.557796 env[1307]: time="2025-07-10T00:43:42.557741446Z" level=info msg="StartContainer for \"95638b0190369ae51abc0ff8dc54bbfb7f937b397d9295249515146a168eb3e1\"" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.433 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0 coredns-7c65d6cfc9- kube-system f04b4ccc-80be-498b-a53a-b961975a280d 1071 0 2025-07-10 00:43:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hlrk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34eb155d9a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.434 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.468 [INFO][4462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" HandleID="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.469 [INFO][4462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" HandleID="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hlrk9", "timestamp":"2025-07-10 00:43:42.468886597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.469 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.469 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.469 [INFO][4462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.477 [INFO][4462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.482 [INFO][4462] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.487 [INFO][4462] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.489 [INFO][4462] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.491 [INFO][4462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.492 [INFO][4462] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.493 [INFO][4462] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776 Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.530 [INFO][4462] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.537 [INFO][4462] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.538 [INFO][4462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" host="localhost" Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.538 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:42.562535 env[1307]: 2025-07-10 00:43:42.538 [INFO][4462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" HandleID="k8s-pod-network.865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.540 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f04b4ccc-80be-498b-a53a-b961975a280d", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hlrk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34eb155d9a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.540 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.540 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34eb155d9a1 ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.544 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.546 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f04b4ccc-80be-498b-a53a-b961975a280d", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776", Pod:"coredns-7c65d6cfc9-hlrk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34eb155d9a1", MAC:"6e:b1:26:47:f8:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.563197 env[1307]: 2025-07-10 00:43:42.560 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlrk9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:43:42.582000 audit[4544]: NETFILTER_CFG table=filter:114 family=2 entries=54 op=nft_register_chain pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:42.582000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=25556 a0=3 a1=7ffe89dd18f0 a2=0 a3=7ffe89dd18dc items=0 ppid=3518 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:42.582000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:42.595880 env[1307]: time="2025-07-10T00:43:42.595796272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:42.596106 env[1307]: time="2025-07-10T00:43:42.596080350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:42.596202 env[1307]: time="2025-07-10T00:43:42.596177784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:42.596460 env[1307]: time="2025-07-10T00:43:42.596433228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776 pid=4556 runtime=io.containerd.runc.v2 Jul 10 00:43:42.627436 env[1307]: time="2025-07-10T00:43:42.627374723Z" level=info msg="StartContainer for \"95638b0190369ae51abc0ff8dc54bbfb7f937b397d9295249515146a168eb3e1\" returns successfully" Jul 10 00:43:42.629250 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:42.658802 env[1307]: time="2025-07-10T00:43:42.658748027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlrk9,Uid:f04b4ccc-80be-498b-a53a-b961975a280d,Namespace:kube-system,Attempt:1,} returns sandbox id \"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776\"" Jul 10 00:43:42.660320 kubelet[2121]: E0710 00:43:42.660258 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:42.664634 env[1307]: time="2025-07-10T00:43:42.664601642Z" level=info msg="CreateContainer within sandbox \"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:43:42.684797 env[1307]: time="2025-07-10T00:43:42.684733968Z" level=info msg="CreateContainer within sandbox \"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdfe568a387c59eaaae2706a751e0aa54501316c15aa1076926cfb19b53df3e4\"" Jul 10 00:43:42.685707 env[1307]: time="2025-07-10T00:43:42.685685469Z" level=info msg="StartContainer for \"bdfe568a387c59eaaae2706a751e0aa54501316c15aa1076926cfb19b53df3e4\"" Jul 10 00:43:42.690116 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa8f2648780: link becomes ready Jul 10 00:43:42.689754 systemd-networkd[1071]: caliaa8f2648780: Link UP Jul 10 00:43:42.689950 systemd-networkd[1071]: caliaa8f2648780: Gained carrier Jul 10 00:43:42.711000 audit[4636]: NETFILTER_CFG table=filter:115 family=2 entries=52 op=nft_register_chain pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:43:42.711000 audit[4636]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffe48d3aa60 a2=0 a3=7ffe48d3aa4c items=0 ppid=3518 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:42.711000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.602 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0 calico-kube-controllers-d58fb4f9- calico-system b28fe5df-eb28-446d-ba1a-53a53e95c947 1080 0 2025-07-10 00:43:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d58fb4f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d58fb4f9-n2p7m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa8f2648780 [] [] }} ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.602 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.635 [INFO][4571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" HandleID="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.635 [INFO][4571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" HandleID="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d58fb4f9-n2p7m", "timestamp":"2025-07-10 00:43:42.635521772 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.635 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.635 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.635 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.642 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.646 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.655 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.662 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.665 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.665 [INFO][4571] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.667 [INFO][4571] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8 Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.671 [INFO][4571] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.680 [INFO][4571] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.681 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" host="localhost" Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.681 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:43:42.718177 env[1307]: 2025-07-10 00:43:42.681 [INFO][4571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" HandleID="k8s-pod-network.433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.683 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0", GenerateName:"calico-kube-controllers-d58fb4f9-", Namespace:"calico-system", SelfLink:"", UID:"b28fe5df-eb28-446d-ba1a-53a53e95c947", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d58fb4f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d58fb4f9-n2p7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa8f2648780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.684 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.684 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa8f2648780 ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.691 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.691 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0", GenerateName:"calico-kube-controllers-d58fb4f9-", Namespace:"calico-system", SelfLink:"", UID:"b28fe5df-eb28-446d-ba1a-53a53e95c947", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d58fb4f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8", Pod:"calico-kube-controllers-d58fb4f9-n2p7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa8f2648780", MAC:"4e:2e:9d:c6:7a:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:42.718729 env[1307]: 2025-07-10 00:43:42.697 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8" Namespace="calico-system" Pod="calico-kube-controllers-d58fb4f9-n2p7m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:43:42.719770 kubelet[2121]: E0710 00:43:42.718564 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:42.727000 audit[4651]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:42.727000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd0a193410 a2=0 a3=7ffd0a1933fc items=0 ppid=2287 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:42.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:42.732000 audit[4651]: NETFILTER_CFG table=nat:117 family=2 entries=22 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:42.732000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd0a193410 a2=0 a3=7ffd0a1933fc items=0 ppid=2287 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:42.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:42.739543 env[1307]: time="2025-07-10T00:43:42.739479521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:43:42.739647 env[1307]: time="2025-07-10T00:43:42.739527262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:43:42.739647 env[1307]: time="2025-07-10T00:43:42.739537701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:43:42.740146 env[1307]: time="2025-07-10T00:43:42.739845535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8 pid=4661 runtime=io.containerd.runc.v2 Jul 10 00:43:42.748249 env[1307]: time="2025-07-10T00:43:42.748166242Z" level=info msg="StartContainer for \"bdfe568a387c59eaaae2706a751e0aa54501316c15aa1076926cfb19b53df3e4\" returns successfully" Jul 10 00:43:42.768692 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:43:42.802929 env[1307]: time="2025-07-10T00:43:42.802810043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d58fb4f9-n2p7m,Uid:b28fe5df-eb28-446d-ba1a-53a53e95c947,Namespace:calico-system,Attempt:1,} returns sandbox id \"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8\"" Jul 10 00:43:43.392457 kubelet[2121]: I0710 00:43:43.392393 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-646c7495cd-c8pph" podStartSLOduration=27.268529047 podStartE2EDuration="30.392371143s" podCreationTimestamp="2025-07-10 00:43:13 +0000 UTC" firstStartedPulling="2025-07-10 00:43:39.322405429 +0000 UTC m=+40.112785293" lastFinishedPulling="2025-07-10 00:43:42.446247525 +0000 UTC m=+43.236627389" observedRunningTime="2025-07-10 00:43:42.716451863 +0000 UTC m=+43.506831747" watchObservedRunningTime="2025-07-10 00:43:43.392371143 +0000 UTC m=+44.182751007" Jul 10 00:43:43.405000 audit[4713]: NETFILTER_CFG table=filter:118 family=2 entries=11 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:43.405000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffa9497fb0 a2=0 a3=7fffa9497f9c items=0 ppid=2287 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:43.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:43.411000 audit[4713]: NETFILTER_CFG table=nat:119 family=2 entries=29 op=nft_register_chain pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:43.411000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fffa9497fb0 a2=0 a3=7fffa9497f9c items=0 ppid=2287 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:43.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:43.723460 kubelet[2121]: E0710 00:43:43.723424 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:43.983850 systemd-networkd[1071]: cali34eb155d9a1: Gained IPv6LL Jul 10 00:43:44.303900 systemd-networkd[1071]: cali05471a71818: Gained IPv6LL Jul 10 00:43:44.320000 audit[4715]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:44.320000 audit[4715]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd46365070 a2=0 a3=7ffd4636505c items=0 ppid=2287 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:44.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:44.326000 audit[4715]: NETFILTER_CFG table=nat:121 family=2 entries=48 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:44.326000 audit[4715]: SYSCALL arch=c000003e syscall=46 success=yes exit=15732 a0=3 a1=7ffd46365070 a2=0 a3=7ffd4636505c items=0 ppid=2287 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:44.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:44.559888 systemd-networkd[1071]: caliaa8f2648780: Gained IPv6LL Jul 10 00:43:44.725703 kubelet[2121]: E0710 00:43:44.725605 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:44.737009 kubelet[2121]: I0710 00:43:44.736926 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hlrk9" podStartSLOduration=40.73690202 podStartE2EDuration="40.73690202s" podCreationTimestamp="2025-07-10 00:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:43:44.228748921 +0000 UTC m=+45.019128785" watchObservedRunningTime="2025-07-10 00:43:44.73690202 +0000 UTC m=+45.527281874" Jul 10 00:43:44.993410 env[1307]: time="2025-07-10T00:43:44.993354514Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:44.995690 env[1307]: time="2025-07-10T00:43:44.995624001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:44.997331 env[1307]: time="2025-07-10T00:43:44.997286017Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:44.998962 env[1307]: time="2025-07-10T00:43:44.998928356Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:44.999358 env[1307]: time="2025-07-10T00:43:44.999318565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:43:45.000374 env[1307]: time="2025-07-10T00:43:45.000351471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:43:45.001256 env[1307]: time="2025-07-10T00:43:45.001202222Z" level=info msg="CreateContainer within sandbox \"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:43:45.017522 env[1307]: time="2025-07-10T00:43:45.017477442Z" level=info msg="CreateContainer within sandbox \"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"73270b33e48a530933d76a3acbeed6b981722f1851560d5a71da300e2bd0af43\"" Jul 10 00:43:45.017999 env[1307]: time="2025-07-10T00:43:45.017965777Z" level=info msg="StartContainer for \"73270b33e48a530933d76a3acbeed6b981722f1851560d5a71da300e2bd0af43\"" Jul 10 00:43:45.068507 env[1307]: time="2025-07-10T00:43:45.068447104Z" level=info msg="StartContainer for \"73270b33e48a530933d76a3acbeed6b981722f1851560d5a71da300e2bd0af43\" returns successfully" Jul 10 00:43:45.346000 audit[4751]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=4751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:45.346000 audit[4751]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffb08efa00 a2=0 a3=7fffb08ef9ec items=0 ppid=2287 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:45.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:45.361000 audit[4751]: NETFILTER_CFG table=nat:123 family=2 entries=60 op=nft_register_chain pid=4751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:45.361000 audit[4751]: SYSCALL arch=c000003e syscall=46 success=yes exit=21396 a0=3 a1=7fffb08efa00 a2=0 a3=7fffb08ef9ec items=0 ppid=2287 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:45.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:45.398824 env[1307]: time="2025-07-10T00:43:45.398643481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:45.400915 env[1307]: time="2025-07-10T00:43:45.400869664Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:45.402862 env[1307]: time="2025-07-10T00:43:45.402805219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:45.405017 env[1307]: time="2025-07-10T00:43:45.404965679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:45.405326 env[1307]: time="2025-07-10T00:43:45.405284102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:43:45.406689 env[1307]: time="2025-07-10T00:43:45.406621103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:43:45.408354 env[1307]: time="2025-07-10T00:43:45.408271778Z" level=info msg="CreateContainer within sandbox \"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:43:45.425229 env[1307]: time="2025-07-10T00:43:45.425174256Z" level=info msg="CreateContainer within sandbox \"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d208b4cdd0e77453cda55e91dd7dc4a4e83b0cc7ab10d67f04cc024f1647d366\"" Jul 10 00:43:45.425845 env[1307]: time="2025-07-10T00:43:45.425798718Z" level=info msg="StartContainer for \"d208b4cdd0e77453cda55e91dd7dc4a4e83b0cc7ab10d67f04cc024f1647d366\"" Jul 10 00:43:45.484933 env[1307]: time="2025-07-10T00:43:45.484884675Z" level=info msg="StartContainer for \"d208b4cdd0e77453cda55e91dd7dc4a4e83b0cc7ab10d67f04cc024f1647d366\" returns successfully" Jul 10 00:43:45.730952 kubelet[2121]: E0710 00:43:45.730913 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:46.081315 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:43080.service. Jul 10 00:43:46.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.99:22-10.0.0.1:43080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:46.082626 kernel: kauditd_printk_skb: 40 callbacks suppressed Jul 10 00:43:46.082811 kernel: audit: type=1130 audit(1752108226.080:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.99:22-10.0.0.1:43080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:46.124000 audit[4799]: USER_ACCT pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.125172 sshd[4799]: Accepted publickey for core from 10.0.0.1 port 43080 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:46.144453 sshd[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:46.126000 audit[4799]: CRED_ACQ pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.148572 systemd-logind[1287]: New session 10 of user core. Jul 10 00:43:46.149050 systemd[1]: Started session-10.scope. Jul 10 00:43:46.151419 kernel: audit: type=1101 audit(1752108226.124:433): pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.151547 kernel: audit: type=1103 audit(1752108226.126:434): pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.151572 kernel: audit: type=1006 audit(1752108226.126:435): pid=4799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 10 00:43:46.153766 kernel: audit: type=1300 audit(1752108226.126:435): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec4e945d0 a2=3 a3=0 items=0 ppid=1 pid=4799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.126000 audit[4799]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec4e945d0 a2=3 a3=0 items=0 ppid=1 pid=4799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.126000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:46.158739 kernel: audit: type=1327 audit(1752108226.126:435): proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:46.158784 kernel: audit: type=1105 audit(1752108226.153:436): pid=4799 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.153000 audit[4799]: USER_START pid=4799 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.162715 kernel: audit: type=1103 audit(1752108226.155:437): pid=4802 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.155000 audit[4802]: CRED_ACQ pid=4802 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.386894 kernel: audit: type=1325 audit(1752108226.379:438): table=filter:124 family=2 entries=10 op=nft_register_rule pid=4813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:46.387025 kernel: audit: type=1106 audit(1752108226.381:439): pid=4799 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.379000 audit[4813]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=4813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:46.381000 audit[4799]: USER_END pid=4799 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.380345 sshd[4799]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:46.383694 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:43080.service: Deactivated successfully. Jul 10 00:43:46.384435 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:43:46.384907 systemd-logind[1287]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:43:46.385861 systemd-logind[1287]: Removed session 10. Jul 10 00:43:46.381000 audit[4799]: CRED_DISP pid=4799 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:46.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.99:22-10.0.0.1:43080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:46.379000 audit[4813]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc356fad10 a2=0 a3=7ffc356facfc items=0 ppid=2287 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:46.390000 audit[4813]: NETFILTER_CFG table=nat:125 family=2 entries=32 op=nft_register_rule pid=4813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:46.390000 audit[4813]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffc356fad10 a2=0 a3=7ffc356facfc items=0 ppid=2287 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:46.736700 kubelet[2121]: E0710 00:43:46.736584 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:43:46.773100 kubelet[2121]: I0710 00:43:46.770007 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-646c7495cd-vm5gv" podStartSLOduration=27.907908807 podStartE2EDuration="33.76998296s" podCreationTimestamp="2025-07-10 00:43:13 +0000 UTC" firstStartedPulling="2025-07-10 00:43:39.544314741 +0000 UTC m=+40.334694615" lastFinishedPulling="2025-07-10 00:43:45.406388894 +0000 UTC m=+46.196768768" observedRunningTime="2025-07-10 00:43:45.740721911 +0000 UTC m=+46.531101765" watchObservedRunningTime="2025-07-10 00:43:46.76998296 +0000 UTC m=+47.560362824" Jul 10 00:43:46.786000 audit[4817]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:46.786000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc0b35abb0 a2=0 a3=7ffc0b35ab9c items=0 ppid=2287 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:46.790000 audit[4817]: NETFILTER_CFG table=nat:127 family=2 entries=36 op=nft_register_chain pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:46.790000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffc0b35abb0 a2=0 a3=7ffc0b35ab9c items=0 ppid=2287 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:46.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:47.761507 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount579723397.mount: Deactivated successfully. Jul 10 00:43:50.479693 env[1307]: time="2025-07-10T00:43:50.479608157Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:50.637424 env[1307]: time="2025-07-10T00:43:50.637344789Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:50.674615 env[1307]: time="2025-07-10T00:43:50.674541466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:50.722026 env[1307]: time="2025-07-10T00:43:50.721955627Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:50.722716 env[1307]: time="2025-07-10T00:43:50.722648779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:43:50.724229 env[1307]: time="2025-07-10T00:43:50.724174255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:43:50.725669 env[1307]: time="2025-07-10T00:43:50.725613709Z" level=info msg="CreateContainer within sandbox \"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:43:51.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.99:22-10.0.0.1:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:51.383934 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:58758.service. Jul 10 00:43:51.385852 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 10 00:43:51.400114 kernel: audit: type=1130 audit(1752108231.383:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.99:22-10.0.0.1:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:43:51.428000 audit[4820]: USER_ACCT pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.429336 sshd[4820]: Accepted publickey for core from 10.0.0.1 port 58758 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:51.433777 kernel: audit: type=1101 audit(1752108231.428:446): pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.433934 kernel: audit: type=1103 audit(1752108231.433:447): pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.433000 audit[4820]: CRED_ACQ pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.434299 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:51.443118 kernel: audit: type=1006 audit(1752108231.433:448): pid=4820 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 10 00:43:51.442930 systemd[1]: Started session-11.scope. Jul 10 00:43:51.433000 audit[4820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfd5ac150 a2=3 a3=0 items=0 ppid=1 pid=4820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:51.444007 systemd-logind[1287]: New session 11 of user core. 
Jul 10 00:43:51.447865 kernel: audit: type=1300 audit(1752108231.433:448): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfd5ac150 a2=3 a3=0 items=0 ppid=1 pid=4820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:51.433000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:51.458086 kernel: audit: type=1327 audit(1752108231.433:448): proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:51.458197 kernel: audit: type=1105 audit(1752108231.451:449): pid=4820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.451000 audit[4820]: USER_START pid=4820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.462421 kernel: audit: type=1103 audit(1752108231.452:450): pid=4823 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.452000 audit[4823]: CRED_ACQ pid=4823 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.466919 env[1307]: time="2025-07-10T00:43:51.466858833Z" level=info msg="CreateContainer within sandbox \"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4d11869d8adbfd1d5ebed58157b97c4fadd763609efdffc594d25ee233031bc8\"" Jul 10 00:43:51.467883 env[1307]: time="2025-07-10T00:43:51.467840160Z" level=info msg="StartContainer for \"4d11869d8adbfd1d5ebed58157b97c4fadd763609efdffc594d25ee233031bc8\"" Jul 10 00:43:51.806823 env[1307]: time="2025-07-10T00:43:51.806765357Z" level=info msg="StartContainer for \"4d11869d8adbfd1d5ebed58157b97c4fadd763609efdffc594d25ee233031bc8\" returns successfully" Jul 10 00:43:51.822254 kubelet[2121]: I0710 00:43:51.822005 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-5prrt" podStartSLOduration=28.597723793 podStartE2EDuration="36.821986117s" podCreationTimestamp="2025-07-10 00:43:15 +0000 UTC" firstStartedPulling="2025-07-10 00:43:42.499545868 +0000 UTC m=+43.289925742" lastFinishedPulling="2025-07-10 00:43:50.723808182 +0000 UTC m=+51.514188066" observedRunningTime="2025-07-10 00:43:51.821622619 +0000 UTC m=+52.612002483" watchObservedRunningTime="2025-07-10 00:43:51.821986117 +0000 UTC m=+52.612365981" Jul 10 00:43:51.837649 systemd[1]: run-containerd-runc-k8s.io-4d11869d8adbfd1d5ebed58157b97c4fadd763609efdffc594d25ee233031bc8-runc.LiZGpd.mount: Deactivated successfully. 
Jul 10 00:43:51.846000 audit[4883]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:51.851072 kernel: audit: type=1325 audit(1752108231.846:451): table=filter:128 family=2 entries=10 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:51.846000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc7dd8c270 a2=0 a3=7ffc7dd8c25c items=0 ppid=2287 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:51.860204 kernel: audit: type=1300 audit(1752108231.846:451): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc7dd8c270 a2=0 a3=7ffc7dd8c25c items=0 ppid=2287 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:51.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:51.860000 audit[4883]: NETFILTER_CFG table=nat:129 family=2 entries=24 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:43:51.860000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc7dd8c270 a2=0 a3=7ffc7dd8c25c items=0 ppid=2287 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:51.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:43:51.878753 sshd[4820]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:51.881545 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:58772.service. Jul 10 00:43:51.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.99:22-10.0.0.1:58772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:51.881000 audit[4820]: USER_END pid=4820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.881000 audit[4820]: CRED_DISP pid=4820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:51.884605 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:58758.service: Deactivated successfully. Jul 10 00:43:51.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.99:22-10.0.0.1:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:51.886119 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:43:51.886130 systemd-logind[1287]: Session 11 logged out. Waiting for processes to exit. 
Jul 10 00:43:51.887458 systemd-logind[1287]: Removed session 11. Jul 10 00:43:52.189000 audit[4890]: USER_ACCT pid=4890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:52.190825 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 58772 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:52.191000 audit[4890]: CRED_ACQ pid=4890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:52.191000 audit[4890]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8f41c350 a2=3 a3=0 items=0 ppid=1 pid=4890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:52.191000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:52.192037 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:52.195889 systemd-logind[1287]: New session 12 of user core. Jul 10 00:43:52.196604 systemd[1]: Started session-12.scope. Jul 10 00:43:52.199000 audit[4890]: USER_START pid=4890 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:52.201000 audit[4900]: CRED_ACQ pid=4900 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.99:22-10.0.0.1:58788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:53.136001 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:58788.service. Jul 10 00:43:53.173601 sshd[4890]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:53.174000 audit[4890]: USER_END pid=4890 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.174000 audit[4890]: CRED_DISP pid=4890 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.176979 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:58772.service: Deactivated successfully. Jul 10 00:43:53.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.99:22-10.0.0.1:58772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:53.177955 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:43:53.179106 systemd-logind[1287]: Session 12 logged out. 
Waiting for processes to exit. Jul 10 00:43:53.180744 systemd-logind[1287]: Removed session 12. Jul 10 00:43:53.215000 audit[4931]: USER_ACCT pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.216744 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:53.217000 audit[4931]: CRED_ACQ pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.217000 audit[4931]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf20781f0 a2=3 a3=0 items=0 ppid=1 pid=4931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:53.217000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:53.227000 audit[4931]: USER_START pid=4931 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.229000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.222536 systemd-logind[1287]: New session 13 of user core. Jul 10 00:43:53.218185 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:53.223537 systemd[1]: Started session-13.scope. Jul 10 00:43:53.458542 sshd[4931]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:53.458000 audit[4931]: USER_END pid=4931 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.458000 audit[4931]: CRED_DISP pid=4931 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:53.461135 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:58788.service: Deactivated successfully. Jul 10 00:43:53.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.99:22-10.0.0.1:58788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:53.464924 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:43:53.465623 systemd-logind[1287]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:43:53.466390 systemd-logind[1287]: Removed session 13. 
Jul 10 00:43:55.757262 env[1307]: time="2025-07-10T00:43:55.757184829Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:55.759614 env[1307]: time="2025-07-10T00:43:55.759545329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:55.763002 env[1307]: time="2025-07-10T00:43:55.762958452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:55.765268 env[1307]: time="2025-07-10T00:43:55.765227688Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:55.765935 env[1307]: time="2025-07-10T00:43:55.765901005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:43:55.766988 env[1307]: time="2025-07-10T00:43:55.766964227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:43:55.785707 env[1307]: time="2025-07-10T00:43:55.785525728Z" level=info msg="CreateContainer within sandbox \"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:43:55.802566 env[1307]: time="2025-07-10T00:43:55.802507230Z" level=info msg="CreateContainer within sandbox \"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"10a1bcef950bd18ef98a99dad7ba729149f590a413a37c6ee97b7723f1663d9b\"" Jul 10 00:43:55.803029 env[1307]: time="2025-07-10T00:43:55.803006895Z" level=info msg="StartContainer for \"10a1bcef950bd18ef98a99dad7ba729149f590a413a37c6ee97b7723f1663d9b\"" Jul 10 00:43:55.858976 env[1307]: time="2025-07-10T00:43:55.858934485Z" level=info msg="StartContainer for \"10a1bcef950bd18ef98a99dad7ba729149f590a413a37c6ee97b7723f1663d9b\" returns successfully" Jul 10 00:43:56.844416 kubelet[2121]: I0710 00:43:56.844330 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d58fb4f9-n2p7m" podStartSLOduration=27.881953041 podStartE2EDuration="40.844304026s" podCreationTimestamp="2025-07-10 00:43:16 +0000 UTC" firstStartedPulling="2025-07-10 00:43:42.804385126 +0000 UTC m=+43.594764990" lastFinishedPulling="2025-07-10 00:43:55.766736091 +0000 UTC m=+56.557115975" observedRunningTime="2025-07-10 00:43:56.843497486 +0000 UTC m=+57.633877360" watchObservedRunningTime="2025-07-10 00:43:56.844304026 +0000 UTC m=+57.634683900" Jul 10 00:43:58.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.99:22-10.0.0.1:58800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:58.462044 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:58800.service. 
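The kubelet's pod_startup_latency_tracker record above reports podStartE2EDuration=40.844304026s and podStartSLOduration=27.881953041s, and both figures follow from the timestamps in the same record: E2E is observedRunningTime minus podCreationTimestamp, and the SLO figure additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of that arithmetic, as a sketch (the ts helper is illustrative and truncates fractional seconds to microseconds):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Parse "2025-07-10 00:43:56.844304026 +0000 UTC" style stamps from the
    # record above; keep only microsecond precision, which datetime supports.
    date, clock = s.split()[:2]
    return datetime.fromisoformat(f"{date} {clock[:15]}").replace(tzinfo=timezone.utc)

created    = ts("2025-07-10 00:43:16 +0000 UTC")            # podCreationTimestamp
first_pull = ts("2025-07-10 00:43:42.804385126 +0000 UTC")  # firstStartedPulling
last_pull  = ts("2025-07-10 00:43:55.766736091 +0000 UTC")  # lastFinishedPulling
running    = ts("2025-07-10 00:43:56.844304026 +0000 UTC")  # observedRunningTime

e2e = (running - created).total_seconds()             # -> 40.844304 (podStartE2EDuration)
slo = e2e - (last_pull - first_pull).total_seconds()  # -> 27.881953 (podStartSLOduration)
print(f"E2E={e2e:.6f}s  SLO={slo:.6f}s")
```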
Jul 10 00:43:58.463183 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 10 00:43:58.463237 kernel: audit: type=1130 audit(1752108238.461:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.99:22-10.0.0.1:58800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:58.505000 audit[5019]: USER_ACCT pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.510736 kernel: audit: type=1101 audit(1752108238.505:475): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.510776 kernel: audit: type=1103 audit(1752108238.509:476): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.509000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.510855 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 58800 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:43:58.511144 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:43:58.521680 kernel: audit: type=1006 audit(1752108238.509:477): pid=5019 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 10 00:43:58.521820 kernel: audit: type=1300 audit(1752108238.509:477): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe14c08520 a2=3 a3=0 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:58.509000 audit[5019]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe14c08520 a2=3 a3=0 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:43:58.516167 systemd-logind[1287]: New session 14 of user core. Jul 10 00:43:58.516270 systemd[1]: Started session-14.scope. 
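From this point the journal interleaves each raw kernel audit line with its symbolic counterpart, and the two can be matched through the shared audit(&lt;epoch&gt;:&lt;serial&gt;) identifier: for example type=1130 audit(1752108238.461:474) carries the same payload as the SERVICE_START record for sshd@13, and type=1101 (...:475) matches the USER_ACCT record for pid 5019. The correspondences that can be read off this log, collected as a small lookup (a convenience sketch, not something the log itself defines):

```python
# Audit record types paired with their symbolic names via the shared
# audit(<timestamp>:<serial>) id in the surrounding records.
AUDIT_TYPES = {
    1101: "USER_ACCT",      # PAM accounting check
    1103: "CRED_ACQ",       # PAM setcred
    1104: "CRED_DISP",      # PAM credential disposal
    1105: "USER_START",     # PAM session_open
    1106: "USER_END",       # PAM session_close
    1130: "SERVICE_START",  # systemd unit started (e.g. sshd@13-...)
    1300: "SYSCALL",
    1327: "PROCTITLE",
}
```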
Jul 10 00:43:58.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:58.522000 audit[5019]: USER_START pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.528162 kernel: audit: type=1327 audit(1752108238.509:477): proctitle=737368643A20636F7265205B707269765D Jul 10 00:43:58.528203 kernel: audit: type=1105 audit(1752108238.522:478): pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.531465 kernel: audit: type=1103 audit(1752108238.524:479): pid=5022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.524000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:58.654193 env[1307]: time="2025-07-10T00:43:58.653706481Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:58.655577 env[1307]: time="2025-07-10T00:43:58.655471561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:58.657132 env[1307]: time="2025-07-10T00:43:58.657070264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:58.658357 env[1307]: time="2025-07-10T00:43:58.658329749Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:43:58.658805 env[1307]: time="2025-07-10T00:43:58.658741655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:43:58.661262 env[1307]: time="2025-07-10T00:43:58.661235637Z" level=info msg="CreateContainer within sandbox \"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:43:59.145425 env[1307]: time="2025-07-10T00:43:59.145362373Z" level=info msg="CreateContainer within sandbox \"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ca7b254193281d004daca25f5ae35860ebbeab44aacff7ba261c25b9dc996189\"" Jul 10 00:43:59.146371 env[1307]: time="2025-07-10T00:43:59.146329148Z" level=info msg="StartContainer for 
\"ca7b254193281d004daca25f5ae35860ebbeab44aacff7ba261c25b9dc996189\"" Jul 10 00:43:59.211031 env[1307]: time="2025-07-10T00:43:59.210874755Z" level=info msg="StartContainer for \"ca7b254193281d004daca25f5ae35860ebbeab44aacff7ba261c25b9dc996189\" returns successfully" Jul 10 00:43:59.360238 env[1307]: time="2025-07-10T00:43:59.360165305Z" level=info msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" Jul 10 00:43:59.372298 sshd[5019]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:59.372000 audit[5019]: USER_END pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:59.375220 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:58800.service: Deactivated successfully. Jul 10 00:43:59.376357 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:43:59.377007 systemd-logind[1287]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:43:59.377932 systemd-logind[1287]: Removed session 14. Jul 10 00:43:59.372000 audit[5019]: CRED_DISP pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:59.384616 kernel: audit: type=1106 audit(1752108239.372:480): pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:59.384779 kernel: audit: type=1104 audit(1752108239.372:481): pid=5019 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:43:59.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.99:22-10.0.0.1:58800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:43:59.429944 kubelet[2121]: I0710 00:43:59.429901 2121 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:43:59.430426 kubelet[2121]: I0710 00:43:59.430412 2121 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.436 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2307b87-39c7-43c6-8c91-1f74a3de69ab", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb", Pod:"calico-apiserver-646c7495cd-c8pph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0260d0fff8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.437 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.437 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" iface="eth0" netns="" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.437 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.437 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.497 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.497 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.498 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.682 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.682 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.684 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:59.688728 env[1307]: 2025-07-10 00:43:59.686 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.688728 env[1307]: time="2025-07-10T00:43:59.688703832Z" level=info msg="TearDown network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" successfully" Jul 10 00:43:59.689450 env[1307]: time="2025-07-10T00:43:59.688734240Z" level=info msg="StopPodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" returns successfully" Jul 10 00:43:59.689450 env[1307]: time="2025-07-10T00:43:59.689387076Z" level=info msg="RemovePodSandbox for \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" Jul 10 00:43:59.689505 env[1307]: time="2025-07-10T00:43:59.689431811Z" level=info msg="Forcibly stopping sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\"" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.719 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2307b87-39c7-43c6-8c91-1f74a3de69ab", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae075cb1155f157a152e419ffa710ae50fcc08d038cfec21a6624dd7bd5643eb", Pod:"calico-apiserver-646c7495cd-c8pph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0260d0fff8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.720 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.720 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" iface="eth0" netns="" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.720 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.720 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.744 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.744 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.744 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.750 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.750 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" HandleID="k8s-pod-network.fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Workload="localhost-k8s-calico--apiserver--646c7495cd--c8pph-eth0" Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.752 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:43:59.756428 env[1307]: 2025-07-10 00:43:59.753 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749" Jul 10 00:43:59.759078 env[1307]: time="2025-07-10T00:43:59.756450699Z" level=info msg="TearDown network for sandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" successfully" Jul 10 00:44:00.624385 env[1307]: time="2025-07-10T00:44:00.624307730Z" level=info msg="RemovePodSandbox \"fbf0cf9fa6d0c6053d3099dcf88a3639a4ca1669c8ed494f845a2ae43692f749\" returns successfully" Jul 10 00:44:00.624919 env[1307]: time="2025-07-10T00:44:00.624883458Z" level=info msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.189 [WARNING][5133] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lnpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986", Pod:"csi-node-driver-8lnpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali291442b6c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.189 [INFO][5133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.217478 env[1307]: 
2025-07-10 00:44:01.189 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" iface="eth0" netns="" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.189 [INFO][5133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.189 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.207 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.207 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.207 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.212 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.212 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.213 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.217478 env[1307]: 2025-07-10 00:44:01.215 [INFO][5133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.218341 env[1307]: time="2025-07-10T00:44:01.217511111Z" level=info msg="TearDown network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" successfully" Jul 10 00:44:01.218341 env[1307]: time="2025-07-10T00:44:01.217541910Z" level=info msg="StopPodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" returns successfully" Jul 10 00:44:01.218341 env[1307]: time="2025-07-10T00:44:01.218011786Z" level=info msg="RemovePodSandbox for \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" Jul 10 00:44:01.218341 env[1307]: time="2025-07-10T00:44:01.218038197Z" level=info msg="Forcibly stopping sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\"" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.280 [WARNING][5158] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lnpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d3fc6bb-5ba8-4c59-ab0f-83a157f847c1", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b512daf6e95a5e197c66ad13328c53c6864209b4665520020b7711b90deb986", Pod:"csi-node-driver-8lnpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali291442b6c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.280 [INFO][5158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.280 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" iface="eth0" netns="" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.280 [INFO][5158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.280 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.301 [INFO][5168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.301 [INFO][5168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.301 [INFO][5168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.444 [WARNING][5168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.444 [INFO][5168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" HandleID="k8s-pod-network.e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Workload="localhost-k8s-csi--node--driver--8lnpm-eth0" Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.446 [INFO][5168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.450268 env[1307]: 2025-07-10 00:44:01.448 [INFO][5158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502" Jul 10 00:44:01.451117 env[1307]: time="2025-07-10T00:44:01.450787995Z" level=info msg="TearDown network for sandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" successfully" Jul 10 00:44:01.650868 env[1307]: time="2025-07-10T00:44:01.650709784Z" level=info msg="RemovePodSandbox \"e56efb89889096958edad5af028560d47f8cf75f2483f4aaa0f34e504f318502\" returns successfully" Jul 10 00:44:01.651564 env[1307]: time="2025-07-10T00:44:01.651515451Z" level=info msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.682 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"fae28b13-a385-46eb-8a07-d49af21f8b28", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127", Pod:"calico-apiserver-646c7495cd-vm5gv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876d15320d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.683 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 
00:44:01.683 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" iface="eth0" netns="" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.683 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.683 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.709 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.710 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.710 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.716 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.716 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.718 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.722505 env[1307]: 2025-07-10 00:44:01.720 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.723670 env[1307]: time="2025-07-10T00:44:01.722546407Z" level=info msg="TearDown network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" successfully" Jul 10 00:44:01.723670 env[1307]: time="2025-07-10T00:44:01.722578358Z" level=info msg="StopPodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" returns successfully" Jul 10 00:44:01.723670 env[1307]: time="2025-07-10T00:44:01.723122727Z" level=info msg="RemovePodSandbox for \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" Jul 10 00:44:01.723670 env[1307]: time="2025-07-10T00:44:01.723148777Z" level=info msg="Forcibly stopping sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\"" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.759 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0", GenerateName:"calico-apiserver-646c7495cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"fae28b13-a385-46eb-8a07-d49af21f8b28", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c7495cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f64bdfa010fefbe1b4c0c8dfc74de4eb5e35b6fa81f68d775030ccab412b9127", Pod:"calico-apiserver-646c7495cd-vm5gv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876d15320d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.759 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.759 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" iface="eth0" netns="" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.759 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.759 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.778 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.778 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.778 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.784 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.784 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" HandleID="k8s-pod-network.4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Workload="localhost-k8s-calico--apiserver--646c7495cd--vm5gv-eth0" Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.786 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.790856 env[1307]: 2025-07-10 00:44:01.788 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef" Jul 10 00:44:01.791331 env[1307]: time="2025-07-10T00:44:01.790887330Z" level=info msg="TearDown network for sandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" successfully" Jul 10 00:44:01.794497 env[1307]: time="2025-07-10T00:44:01.794471619Z" level=info msg="RemovePodSandbox \"4f1d602015e9c92ee4f88dc4908cbec6ecc9a5916832d0d32e6d78f9fe9a04ef\" returns successfully" Jul 10 00:44:01.795167 env[1307]: time="2025-07-10T00:44:01.795125126Z" level=info msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.829 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5prrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e01c7c4e-eca5-4812-95c5-200d95d24a32", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07", Pod:"goldmane-58fd7646b9-5prrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05471a71818", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.830 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.830 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" iface="eth0" netns="" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.830 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.830 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.853 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.853 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.853 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.859 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.859 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.861 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.864838 env[1307]: 2025-07-10 00:44:01.863 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.866216 env[1307]: time="2025-07-10T00:44:01.864869616Z" level=info msg="TearDown network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" successfully" Jul 10 00:44:01.866216 env[1307]: time="2025-07-10T00:44:01.864901697Z" level=info msg="StopPodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" returns successfully" Jul 10 00:44:01.866216 env[1307]: time="2025-07-10T00:44:01.865334594Z" level=info msg="RemovePodSandbox for \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" Jul 10 00:44:01.866216 env[1307]: time="2025-07-10T00:44:01.865364712Z" level=info msg="Forcibly stopping sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\"" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.908 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5prrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e01c7c4e-eca5-4812-95c5-200d95d24a32", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8fd248a8d53d967546ca7b6fb5fc08fd2a9d7a0eea032c2a0c5ee69c5d38c07", Pod:"goldmane-58fd7646b9-5prrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05471a71818", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.909 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.909 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" iface="eth0" netns="" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.909 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.909 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.934 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.934 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.934 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.941 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.941 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" HandleID="k8s-pod-network.10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Workload="localhost-k8s-goldmane--58fd7646b9--5prrt-eth0" Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.942 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:01.946961 env[1307]: 2025-07-10 00:44:01.945 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df" Jul 10 00:44:01.947635 env[1307]: time="2025-07-10T00:44:01.946992764Z" level=info msg="TearDown network for sandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" successfully" Jul 10 00:44:01.951072 env[1307]: time="2025-07-10T00:44:01.951031219Z" level=info msg="RemovePodSandbox \"10c57369d9900128f4c52c4d8e21c795a60a17e88abdbc303dd3dc44b74914df\" returns successfully" Jul 10 00:44:01.951596 env[1307]: time="2025-07-10T00:44:01.951572702Z" level=info msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:01.989 [WARNING][5292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c49502b-c641-4c73-b4e5-5955ec9166b1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561", Pod:"coredns-7c65d6cfc9-nbqqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c8505c1030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:01.989 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:01.989 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" iface="eth0" netns="" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:01.989 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:01.989 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.008 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.008 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.008 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.015 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.015 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.016 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.019872 env[1307]: 2025-07-10 00:44:02.018 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.020396 env[1307]: time="2025-07-10T00:44:02.019896804Z" level=info msg="TearDown network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" successfully" Jul 10 00:44:02.020396 env[1307]: time="2025-07-10T00:44:02.019929116Z" level=info msg="StopPodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" returns successfully" Jul 10 00:44:02.020460 env[1307]: time="2025-07-10T00:44:02.020433498Z" level=info msg="RemovePodSandbox for \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" Jul 10 00:44:02.020526 env[1307]: time="2025-07-10T00:44:02.020471190Z" level=info msg="Forcibly stopping sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\"" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.050 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c49502b-c641-4c73-b4e5-5955ec9166b1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7f26cfc29beb2c74009e399f5be0b68bd9486a1444ec9422b1c24c9d09cc561", Pod:"coredns-7c65d6cfc9-nbqqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c8505c1030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.050 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.050 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" iface="eth0" netns="" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.050 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.050 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.069 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.069 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.069 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.075 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.075 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" HandleID="k8s-pod-network.555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Workload="localhost-k8s-coredns--7c65d6cfc9--nbqqb-eth0" Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.077 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.080794 env[1307]: 2025-07-10 00:44:02.078 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b" Jul 10 00:44:02.081287 env[1307]: time="2025-07-10T00:44:02.080834034Z" level=info msg="TearDown network for sandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" successfully" Jul 10 00:44:02.084680 env[1307]: time="2025-07-10T00:44:02.084643109Z" level=info msg="RemovePodSandbox \"555976555fff74d8ab513bce7158bfe57d8c6937f08e5acbf611514164e8148b\" returns successfully" Jul 10 00:44:02.085367 env[1307]: time="2025-07-10T00:44:02.085312846Z" level=info msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.118 [WARNING][5349] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" WorkloadEndpoint="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.118 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.118 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" iface="eth0" netns="" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.118 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.118 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.142 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.143 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.143 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.148 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.148 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.149 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.153346 env[1307]: 2025-07-10 00:44:02.151 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.198106 env[1307]: time="2025-07-10T00:44:02.153907953Z" level=info msg="TearDown network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" successfully" Jul 10 00:44:02.198106 env[1307]: time="2025-07-10T00:44:02.153941777Z" level=info msg="StopPodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" returns successfully" Jul 10 00:44:02.198106 env[1307]: time="2025-07-10T00:44:02.154476477Z" level=info msg="RemovePodSandbox for \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" Jul 10 00:44:02.198106 env[1307]: time="2025-07-10T00:44:02.154501344Z" level=info msg="Forcibly stopping sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\"" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.182 [WARNING][5376] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" WorkloadEndpoint="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.182 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.182 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" iface="eth0" netns="" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.182 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.182 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.202 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.202 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.202 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.207 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.207 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" HandleID="k8s-pod-network.82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Workload="localhost-k8s-whisker--586d6954--lvt2n-eth0" Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.208 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.211773 env[1307]: 2025-07-10 00:44:02.210 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1" Jul 10 00:44:02.212682 env[1307]: time="2025-07-10T00:44:02.211796016Z" level=info msg="TearDown network for sandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" successfully" Jul 10 00:44:02.402783 env[1307]: time="2025-07-10T00:44:02.402716310Z" level=info msg="RemovePodSandbox \"82152fa5c78e604e8593d91f997b5759fdce5a512accebaf10dad577cc909ef1\" returns successfully" Jul 10 00:44:02.403319 env[1307]: time="2025-07-10T00:44:02.403274315Z" level=info msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.432 [WARNING][5402] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0", GenerateName:"calico-kube-controllers-d58fb4f9-", Namespace:"calico-system", SelfLink:"", UID:"b28fe5df-eb28-446d-ba1a-53a53e95c947", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d58fb4f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8", Pod:"calico-kube-controllers-d58fb4f9-n2p7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa8f2648780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.432 [INFO][5402] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.432 [INFO][5402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" iface="eth0" netns="" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.432 [INFO][5402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.432 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.450 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.450 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.450 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.455 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.455 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.457 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.461289 env[1307]: 2025-07-10 00:44:02.459 [INFO][5402] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.461757 env[1307]: time="2025-07-10T00:44:02.461279361Z" level=info msg="TearDown network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" successfully" Jul 10 00:44:02.461757 env[1307]: time="2025-07-10T00:44:02.461322194Z" level=info msg="StopPodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" returns successfully" Jul 10 00:44:02.461925 env[1307]: time="2025-07-10T00:44:02.461900967Z" level=info msg="RemovePodSandbox for \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" Jul 10 00:44:02.462099 env[1307]: time="2025-07-10T00:44:02.461930513Z" level=info msg="Forcibly stopping sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\"" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.492 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0", GenerateName:"calico-kube-controllers-d58fb4f9-", Namespace:"calico-system", SelfLink:"", UID:"b28fe5df-eb28-446d-ba1a-53a53e95c947", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d58fb4f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"433b94f9fa5e812f4e9c80c81bddb9fa5b81d37deb6ef0c89d811ad7f65206f8", Pod:"calico-kube-controllers-d58fb4f9-n2p7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa8f2648780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.492 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.492 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" iface="eth0" netns="" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.492 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.492 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.510 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.510 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.510 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.516 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.516 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" HandleID="k8s-pod-network.88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Workload="localhost-k8s-calico--kube--controllers--d58fb4f9--n2p7m-eth0" Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.517 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.521593 env[1307]: 2025-07-10 00:44:02.519 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd" Jul 10 00:44:02.522310 env[1307]: time="2025-07-10T00:44:02.521638719Z" level=info msg="TearDown network for sandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" successfully" Jul 10 00:44:02.525268 env[1307]: time="2025-07-10T00:44:02.525238525Z" level=info msg="RemovePodSandbox \"88cc1bedbdabd19109fa1f81268bcdd88c93ac3267e2f6748e69974d93c876cd\" returns successfully" Jul 10 00:44:02.525748 env[1307]: time="2025-07-10T00:44:02.525718541Z" level=info msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.556 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f04b4ccc-80be-498b-a53a-b961975a280d", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776", Pod:"coredns-7c65d6cfc9-hlrk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34eb155d9a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.556 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.556 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" iface="eth0" netns="" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.556 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.556 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.577 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.577 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.577 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.583 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.583 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.584 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.589364 env[1307]: 2025-07-10 00:44:02.587 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.590076 env[1307]: time="2025-07-10T00:44:02.590001061Z" level=info msg="TearDown network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" successfully" Jul 10 00:44:02.590076 env[1307]: time="2025-07-10T00:44:02.590051086Z" level=info msg="StopPodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" returns successfully" Jul 10 00:44:02.590642 env[1307]: time="2025-07-10T00:44:02.590611295Z" level=info msg="RemovePodSandbox for \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" Jul 10 00:44:02.590728 env[1307]: time="2025-07-10T00:44:02.590678894Z" level=info msg="Forcibly stopping sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\"" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.623 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f04b4ccc-80be-498b-a53a-b961975a280d", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865f5ab6c594bce3d2131eb52801032cdc86cbbca7a1e93dfd0bf5566119e776", Pod:"coredns-7c65d6cfc9-hlrk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34eb155d9a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.624 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.624 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" iface="eth0" netns="" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.624 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.624 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.645 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.645 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.645 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.651 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.651 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" HandleID="k8s-pod-network.09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Workload="localhost-k8s-coredns--7c65d6cfc9--hlrk9-eth0" Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.652 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:44:02.657677 env[1307]: 2025-07-10 00:44:02.655 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761" Jul 10 00:44:02.658297 env[1307]: time="2025-07-10T00:44:02.658215401Z" level=info msg="TearDown network for sandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" successfully" Jul 10 00:44:02.755684 env[1307]: time="2025-07-10T00:44:02.755541937Z" level=info msg="RemovePodSandbox \"09e5f5f221cc8e75d860ab2dace04bc37879e90a3aef982cf13d35f543e65761\" returns successfully" Jul 10 00:44:04.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.99:22-10.0.0.1:47752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:04.374782 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:47752.service. Jul 10 00:44:04.376160 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:44:04.376219 kernel: audit: type=1130 audit(1752108244.374:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.99:22-10.0.0.1:47752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:04.418000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.418962 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:04.420569 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:04.419000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.424156 systemd-logind[1287]: New session 15 of user core. Jul 10 00:44:04.424872 systemd[1]: Started session-15.scope. Jul 10 00:44:04.427498 kernel: audit: type=1101 audit(1752108244.418:484): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.427578 kernel: audit: type=1103 audit(1752108244.419:485): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.419000 audit[5501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff78aefa10 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:04.434768 kernel: audit: type=1006 audit(1752108244.419:486): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 10 00:44:04.434811 kernel: audit: type=1300 audit(1752108244.419:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff78aefa10 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:04.434842 kernel: audit: type=1327 audit(1752108244.419:486): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:04.419000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:04.428000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.441043 kernel: audit: type=1105 audit(1752108244.428:487): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.441088 kernel: audit: type=1103 audit(1752108244.429:488): pid=5504 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.429000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.603101 sshd[5501]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:04.603000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.605530 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:47752.service: Deactivated successfully. Jul 10 00:44:04.606492 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:44:04.606923 systemd-logind[1287]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:44:04.607565 systemd-logind[1287]: Removed session 15. Jul 10 00:44:04.603000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.611455 kernel: audit: type=1106 audit(1752108244.603:489): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.611595 kernel: audit: type=1104 audit(1752108244.603:490): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:04.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.99:22-10.0.0.1:47752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:06.486111 kubelet[2121]: I0710 00:44:06.486017 2121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8lnpm" podStartSLOduration=31.198445714 podStartE2EDuration="50.485995237s" podCreationTimestamp="2025-07-10 00:43:16 +0000 UTC" firstStartedPulling="2025-07-10 00:43:39.372182002 +0000 UTC m=+40.162561866" lastFinishedPulling="2025-07-10 00:43:58.659731525 +0000 UTC m=+59.450111389" observedRunningTime="2025-07-10 00:44:00.325708745 +0000 UTC m=+61.116088619" watchObservedRunningTime="2025-07-10 00:44:06.485995237 +0000 UTC m=+67.276375101" Jul 10 00:44:09.607134 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:45724.service. Jul 10 00:44:09.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.99:22-10.0.0.1:45724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:09.608500 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:44:09.608566 kernel: audit: type=1130 audit(1752108249.605:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.99:22-10.0.0.1:45724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:09.655000 audit[5538]: USER_ACCT pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.657849 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 45724 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:09.660243 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:09.658000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.664590 systemd-logind[1287]: New session 16 of user core. Jul 10 00:44:09.665515 systemd[1]: Started session-16.scope. Jul 10 00:44:09.665632 kernel: audit: type=1101 audit(1752108249.655:493): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.665680 kernel: audit: type=1103 audit(1752108249.658:494): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.665713 kernel: audit: type=1006 audit(1752108249.658:495): pid=5538 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 10 00:44:09.658000 audit[5538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeca8fbea0 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:09.671696 kernel: audit: type=1300 audit(1752108249.658:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeca8fbea0 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:09.671782 kernel: audit: type=1327 audit(1752108249.658:495): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:09.658000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:09.672961 kernel: audit: type=1105 audit(1752108249.670:496): pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.670000 audit[5538]: USER_START pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.671000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.680231 kernel: audit: type=1103 audit(1752108249.671:497): pid=5541 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.846609 sshd[5538]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:09.846000 audit[5538]: USER_END pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.852005 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:45724.service: Deactivated successfully. Jul 10 00:44:09.846000 audit[5538]: CRED_DISP pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.853103 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:44:09.853817 systemd-logind[1287]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:44:09.854702 systemd-logind[1287]: Removed session 16. Jul 10 00:44:09.856769 kernel: audit: type=1106 audit(1752108249.846:498): pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.856834 kernel: audit: type=1104 audit(1752108249.846:499): pid=5538 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:09.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.99:22-10.0.0.1:45724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:11.325101 kubelet[2121]: E0710 00:44:11.325050 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:44:14.850058 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:45728.service. Jul 10 00:44:14.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.99:22-10.0.0.1:45728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:14.851134 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:44:14.851195 kernel: audit: type=1130 audit(1752108254.848:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.99:22-10.0.0.1:45728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:14.908000 audit[5574]: USER_ACCT pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.910068 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 45728 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:14.912245 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:14.910000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.915911 systemd-logind[1287]: New session 17 of user core. Jul 10 00:44:14.916971 systemd[1]: Started session-17.scope. Jul 10 00:44:14.918145 kernel: audit: type=1101 audit(1752108254.908:502): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.918201 kernel: audit: type=1103 audit(1752108254.910:503): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.918225 kernel: audit: type=1006 audit(1752108254.910:504): pid=5574 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 10 00:44:14.910000 audit[5574]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8053750 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:14.924286 kernel: audit: type=1300 audit(1752108254.910:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8053750 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:14.924434 kernel: audit: type=1327 audit(1752108254.910:504): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:14.910000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:14.925556 kernel: audit: type=1105 audit(1752108254.920:505): pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.920000 audit[5574]: USER_START pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.929499 kernel: audit: type=1103 audit(1752108254.921:506): pid=5577 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:14.921000 audit[5577]: CRED_ACQ pid=5577 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.079466 sshd[5574]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:15.082080 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:45740.service. Jul 10 00:44:15.079000 audit[5574]: USER_END pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.087119 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:45728.service: Deactivated successfully. Jul 10 00:44:15.087684 kernel: audit: type=1106 audit(1752108255.079:507): pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.080000 audit[5574]: CRED_DISP pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.088262 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:44:15.088846 systemd-logind[1287]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:44:15.090395 systemd-logind[1287]: Removed session 17. Jul 10 00:44:15.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.99:22-10.0.0.1:45740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:15.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.99:22-10.0.0.1:45728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:15.092789 kernel: audit: type=1104 audit(1752108255.080:508): pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.124000 audit[5586]: USER_ACCT pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.127733 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 45740 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:15.126000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.126000 audit[5586]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff38ab1d20 a2=3 a3=0 items=0 ppid=1 pid=5586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:15.126000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:15.128226 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:15.133001 systemd-logind[1287]: New session 18 of user core. Jul 10 00:44:15.133905 systemd[1]: Started session-18.scope. Jul 10 00:44:15.138000 audit[5586]: USER_START pid=5586 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.140000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.916903 sshd[5586]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:15.916000 audit[5586]: USER_END pid=5586 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.916000 audit[5586]: CRED_DISP pid=5586 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.919272 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:45744.service. Jul 10 00:44:15.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.99:22-10.0.0.1:45744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:15.921511 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:45740.service: Deactivated successfully. 
Jul 10 00:44:15.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.99:22-10.0.0.1:45740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:15.922500 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:44:15.922525 systemd-logind[1287]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:44:15.923474 systemd-logind[1287]: Removed session 18. Jul 10 00:44:15.960000 audit[5604]: USER_ACCT pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.962735 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 45744 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:15.962000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.962000 audit[5604]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebd08aac0 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:15.962000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:15.964280 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:15.968723 systemd-logind[1287]: New session 19 of user core. Jul 10 00:44:15.969882 systemd[1]: Started session-19.scope. 
Jul 10 00:44:15.973000 audit[5604]: USER_START pid=5604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:15.974000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:17.243000 audit[5641]: NETFILTER_CFG table=filter:130 family=2 entries=9 op=nft_register_rule pid=5641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:17.243000 audit[5641]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe0fffc250 a2=0 a3=7ffe0fffc23c items=0 ppid=2287 pid=5641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:17.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:17.252000 audit[5641]: NETFILTER_CFG table=nat:131 family=2 entries=31 op=nft_register_chain pid=5641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:17.252000 audit[5641]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe0fffc250 a2=0 a3=7ffe0fffc23c items=0 ppid=2287 pid=5641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:17.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:18.355000 audit[5643]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:18.355000 audit[5643]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffca1810a90 a2=0 a3=7ffca1810a7c items=0 ppid=2287 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:18.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:18.375000 audit[5604]: USER_END pid=5604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:18.377811 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:45750.service. Jul 10 00:44:18.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.99:22-10.0.0.1:45750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:18.377000 audit[5604]: CRED_DISP pid=5604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:18.375260 sshd[5604]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:18.381001 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:45744.service: Deactivated successfully. Jul 10 00:44:18.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.99:22-10.0.0.1:45744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:18.382892 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:44:18.383627 systemd-logind[1287]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:44:18.380000 audit[5643]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=5643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:18.380000 audit[5643]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffca1810a90 a2=0 a3=0 items=0 ppid=2287 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:18.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:18.385749 systemd-logind[1287]: Removed session 19. Jul 10 00:44:18.400000 audit[5649]: NETFILTER_CFG table=filter:134 family=2 entries=32 op=nft_register_rule pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:18.400000 audit[5649]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc2e04cec0 a2=0 a3=7ffc2e04ceac items=0 ppid=2287 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:18.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:18.407000 audit[5649]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:18.407000 audit[5649]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc2e04cec0 a2=0 a3=0 items=0 ppid=2287 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:18.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:18.428000 audit[5644]: USER_ACCT pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:18.430433 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 45750 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:18.429000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:18.429000 audit[5644]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2b930890 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:18.429000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:18.431581 sshd[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:18.435643 systemd-logind[1287]: New session 20 of user core. Jul 10 00:44:18.436427 systemd[1]: Started session-20.scope. Jul 10 00:44:18.438000 audit[5644]: USER_START pid=5644 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:18.440000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.046060 sshd[5644]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:19.045000 audit[5644]: USER_END pid=5644 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.045000 audit[5644]: CRED_DISP pid=5644 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.049393 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:45756.service. Jul 10 00:44:19.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.99:22-10.0.0.1:45756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:19.050250 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:45750.service: Deactivated successfully. Jul 10 00:44:19.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.99:22-10.0.0.1:45750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:19.052695 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:44:19.053304 systemd-logind[1287]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:44:19.054602 systemd-logind[1287]: Removed session 20. 
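Annotation: each login above pairs a session-N.scope with a unit named like sshd@18-10.0.0.99:22-10.0.0.1:45744.service. That naming looks like systemd's per-connection instances for a socket-activated sshd (an sshd.socket accepting connections and spawning one sshd@.service each): a connection counter followed by the local endpoint and the peer endpoint. A small parsing sketch (Python, illustrative; the unit name is taken verbatim from the log, the field interpretation is an assumption based on that pattern):

    import re

    # Unit name copied from the log; assumed layout:
    # sshd@<counter>-<local-ip>:<local-port>-<peer-ip>:<peer-port>.service
    unit = "sshd@18-10.0.0.99:22-10.0.0.1:45744.service"

    m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
    if m:
        seq, local_ip, local_port, peer_ip, peer_port = m.groups()
        print(f"connection #{seq}: {peer_ip}:{peer_port} -> {local_ip}:{local_port}")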
Jul 10 00:44:19.095000 audit[5658]: USER_ACCT pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.097498 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 45756 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:19.096000 audit[5658]: CRED_ACQ pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.096000 audit[5658]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9ff9a020 a2=3 a3=0 items=0 ppid=1 pid=5658 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:19.096000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:19.098518 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:19.103005 systemd-logind[1287]: New session 21 of user core. Jul 10 00:44:19.103597 systemd[1]: Started session-21.scope. Jul 10 00:44:19.106000 audit[5658]: USER_START pid=5658 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.108000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.219942 sshd[5658]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:19.219000 audit[5658]: USER_END pid=5658 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.219000 audit[5658]: CRED_DISP pid=5658 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:19.223239 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:45756.service: Deactivated successfully. Jul 10 00:44:19.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.99:22-10.0.0.1:45756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:19.224948 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:44:19.225536 systemd-logind[1287]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:44:19.227188 systemd-logind[1287]: Removed session 21. Jul 10 00:44:24.222923 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:53984.service. Jul 10 00:44:24.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.99:22-10.0.0.1:53984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 10 00:44:24.242044 kernel: kauditd_printk_skb: 63 callbacks suppressed Jul 10 00:44:24.242151 kernel: audit: type=1130 audit(1752108264.222:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.99:22-10.0.0.1:53984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:24.291000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.293329 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:24.295000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.296779 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:24.299828 kernel: audit: type=1101 audit(1752108264.291:553): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.300034 kernel: audit: type=1103 audit(1752108264.295:554): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.302240 kernel: audit: type=1006 audit(1752108264.295:555): pid=5698 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 10 00:44:24.306064 kernel: audit: type=1300 audit(1752108264.295:555): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca509d870 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:24.295000 audit[5698]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca509d870 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:24.307528 kernel: audit: type=1327 audit(1752108264.295:555): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:24.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:24.313464 systemd[1]: Started session-22.scope. Jul 10 00:44:24.314727 systemd-logind[1287]: New session 22 of user core. 
Jul 10 00:44:24.323961 kernel: audit: type=1105 audit(1752108264.318:556): pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.318000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.331123 kernel: audit: type=1103 audit(1752108264.323:557): pid=5701 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.323000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.338798 kubelet[2121]: E0710 00:44:24.338735 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:44:24.450398 sshd[5698]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:24.450000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.452000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.491972 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:53984.service: Deactivated successfully. Jul 10 00:44:24.492724 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:44:24.493503 systemd-logind[1287]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:44:24.494379 systemd-logind[1287]: Removed session 22. Jul 10 00:44:24.494456 kernel: audit: type=1106 audit(1752108264.450:558): pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.494494 kernel: audit: type=1104 audit(1752108264.452:559): pid=5698 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:24.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.99:22-10.0.0.1:53984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:44:25.319822 kubelet[2121]: E0710 00:44:25.319786 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:44:26.857000 audit[5714]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:26.857000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd3792d7e0 a2=0 a3=7ffd3792d7cc items=0 ppid=2287 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:26.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:26.870000 audit[5714]: NETFILTER_CFG table=nat:137 family=2 entries=110 op=nft_register_chain pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:44:26.870000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffd3792d7e0 a2=0 a3=7ffd3792d7cc items=0 ppid=2287 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:26.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:44:29.453608 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:53994.service. Jul 10 00:44:29.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.99:22-10.0.0.1:53994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:29.454995 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 10 00:44:29.455042 kernel: audit: type=1130 audit(1752108269.453:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.99:22-10.0.0.1:53994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:29.491000 audit[5734]: USER_ACCT pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.492580 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:29.495203 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:29.494000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.499621 systemd-logind[1287]: New session 23 of user core. Jul 10 00:44:29.500456 systemd[1]: Started session-23.scope. 
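Annotation: the recurring kubelet dns.go:153 error reflects the classic libc resolver limit of three nameserver entries: when the node's resolv.conf lists more, kubelet warns and keeps only the first three, and the "applied nameserver line" in the message shows what it kept. A minimal sketch of that trimming behaviour (Python, illustrative only; the three-server limit and the applied servers match the log, the fourth server in the example is a made-up placeholder):

    # Illustrative re-creation of the check behind kubelet's
    # "Nameserver limits exceeded" warning: keep only the first three
    # nameservers from a resolv.conf-style text.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]
        if len(servers) > MAX_NAMESERVERS:
            print(f"Nameserver limits exceeded: keeping {servers[:MAX_NAMESERVERS]}, "
                  f"dropping {servers[MAX_NAMESERVERS:]}")
        return servers[:MAX_NAMESERVERS]

    # Example resembling this node: the applied line becomes "1.1.1.1 1.0.0.1 8.8.8.8".
    example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    print(" ".join(applied_nameservers(example)))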
Jul 10 00:44:29.501227 kernel: audit: type=1101 audit(1752108269.491:564): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.501409 kernel: audit: type=1103 audit(1752108269.494:565): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.501456 kernel: audit: type=1006 audit(1752108269.494:566): pid=5734 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 10 00:44:29.494000 audit[5734]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe473da900 a2=3 a3=0 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:29.508622 kernel: audit: type=1300 audit(1752108269.494:566): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe473da900 a2=3 a3=0 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:29.508794 kernel: audit: type=1327 audit(1752108269.494:566): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:29.494000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:29.505000 audit[5734]: USER_START pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.514267 kernel: audit: type=1105 audit(1752108269.505:567): pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.514315 kernel: audit: type=1103 audit(1752108269.507:568): pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.507000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.648588 sshd[5734]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:29.649000 audit[5734]: USER_END pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.651826 systemd-logind[1287]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:44:29.652751 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:53994.service: Deactivated successfully. 
Jul 10 00:44:29.649000 audit[5734]: CRED_DISP pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.653863 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:44:29.654789 systemd-logind[1287]: Removed session 23. Jul 10 00:44:29.658075 kernel: audit: type=1106 audit(1752108269.649:569): pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.658142 kernel: audit: type=1104 audit(1752108269.649:570): pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:29.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.99:22-10.0.0.1:53994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:34.652208 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:38872.service. Jul 10 00:44:34.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.99:22-10.0.0.1:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:34.653440 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:44:34.653504 kernel: audit: type=1130 audit(1752108274.651:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.99:22-10.0.0.1:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:34.692000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.693597 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:34.694624 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:34.693000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.699182 systemd-logind[1287]: New session 24 of user core. Jul 10 00:44:34.699887 systemd[1]: Started session-24.scope. 
Jul 10 00:44:34.700564 kernel: audit: type=1101 audit(1752108274.692:573): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.700621 kernel: audit: type=1103 audit(1752108274.693:574): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.700642 kernel: audit: type=1006 audit(1752108274.693:575): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 10 00:44:34.693000 audit[5748]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd67287f70 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:34.706684 kernel: audit: type=1300 audit(1752108274.693:575): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd67287f70 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:34.693000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:34.708026 kernel: audit: type=1327 audit(1752108274.693:575): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:34.708118 kernel: audit: type=1105 audit(1752108274.706:576): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.706000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.708000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.715499 kernel: audit: type=1103 audit(1752108274.708:577): pid=5751 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.816612 sshd[5748]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:34.816000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.819361 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:38872.service: Deactivated successfully. Jul 10 00:44:34.820517 systemd[1]: session-24.scope: Deactivated successfully. 
Jul 10 00:44:34.821005 systemd-logind[1287]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:44:34.817000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.822604 systemd-logind[1287]: Removed session 24. Jul 10 00:44:34.825573 kernel: audit: type=1106 audit(1752108274.816:578): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.825664 kernel: audit: type=1104 audit(1752108274.817:579): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:34.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.99:22-10.0.0.1:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:39.322243 kubelet[2121]: E0710 00:44:39.322193 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:44:39.821101 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:37964.service. Jul 10 00:44:39.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.99:22-10.0.0.1:37964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:39.822576 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:44:39.822620 kernel: audit: type=1130 audit(1752108279.820:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.99:22-10.0.0.1:37964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:44:39.868000 audit[5786]: USER_ACCT pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.869125 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 37964 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:44:39.870231 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:44:39.869000 audit[5786]: CRED_ACQ pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.875038 systemd-logind[1287]: New session 25 of user core. Jul 10 00:44:39.875244 systemd[1]: Started session-25.scope. 
Jul 10 00:44:39.876956 kernel: audit: type=1101 audit(1752108279.868:582): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.877021 kernel: audit: type=1103 audit(1752108279.869:583): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.879229 kernel: audit: type=1006 audit(1752108279.869:584): pid=5786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 10 00:44:39.883586 kernel: audit: type=1300 audit(1752108279.869:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5d75810 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:39.883642 kernel: audit: type=1327 audit(1752108279.869:584): proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:39.869000 audit[5786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5d75810 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:44:39.869000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:44:39.884988 kernel: audit: type=1105 audit(1752108279.879:585): pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.879000 audit[5786]: USER_START pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.880000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:39.892723 kernel: audit: type=1103 audit(1752108279.880:586): pid=5789 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:40.047581 sshd[5786]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:40.048000 audit[5786]: USER_END pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:40.050322 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:37964.service: Deactivated successfully. Jul 10 00:44:40.051736 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 10 00:44:40.052376 systemd-logind[1287]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:44:40.053202 systemd-logind[1287]: Removed session 25. Jul 10 00:44:40.048000 audit[5786]: CRED_DISP pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:40.056828 kernel: audit: type=1106 audit(1752108280.048:587): pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:40.056880 kernel: audit: type=1104 audit(1752108280.048:588): pid=5786 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:44:40.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.99:22-10.0.0.1:37964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'